I don't understand this. In Supabase, the default is to turn on RLS for new tables. If you turn it on and have no policy set, no user can fetch anything from the table.
You have to explicitly create a read-all policy for the anon role, with no constraints, before people can get access to it.
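For example, a minimal sketch in SQL of what that looks like (the table and policy name here are made up for illustration; in Supabase the unauthenticated role is called anon):

    -- With RLS enabled and no policies, the anon role gets zero rows back.
    create table public.posts (
      id bigint generated always as identity primary key,
      title text not null
    );

    alter table public.posts enable row level security;

    -- You have to opt in to public reads explicitly:
    create policy "Public posts are readable by anyone"
      on public.posts
      for select
      to anon
      using (true);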
The default is secure.
If you turn off RLS, there are warnings everywhere that the table is unsecured.
The author goes on to compare this with PocketBase, which he says you "have to go out of your way" to make insecure. You have to go out of your way with Supabase, as well!
I wonder if the author tested this? I do agree that some third-party website builders who use Supabase on the back end could have created insecure defaults, but that's not Supabase's fault.
A more likely reason is that Supabase is a BaaS: between the client and the DB there is no backend for secret management, so RLS is the only way to create an API directly on the DB.
I’m not sure anyone’s scared off by this. It’s more that declaring your user queries (like Meteor did, or the way GraphQL works) is more intuitive than reasoning about RLS.
It’s not about being scared off; I’m simply challenging the notion that Supabase is secure by default. It depends on your definition of secure, since everyone has a different threat model, but the above thread demonstrates that a good chunk of people would probably say no, it’s not actually secure by default. Being scared off would probably be the best possible outcome compared to the current situation, which is “we don’t really have a good story to tell about whether this is secure or not”.
The fact that it takes a whole thread of conversation to even unwrap whether the default approach they took is good enough is a strong signal to me that it isn’t, because that level of complexity in the implementation often implies a model whose attack surface is large enough to have weaknesses that can be exploited without much effort.
> For security, the Auth schema is not exposed in the auto-generated API. If you want to access users data via the API, you can create your own user tables in the public schema.
People are using LLMs to generate apps, and it's easy for non-technical people to miss this stuff. The blog post mentions https://lovable.dev/ becoming a $300M company, which uses Supabase by default and basically generates React SPAs with no true backend. But random people won't understand this distinction and will want to create full, real apps. Doing this serverless is tricky and requires a lot of careful thought to do right.
Lovable is not going to tell them to use a proper auth service or fully secure their data. One Lovable project I looked at had generated an entire custom JS Markdown parser instead of using react-markdown, for example.
I had to do a double-take back to the article after reading this - it actually said $330M (raised at a $6.6B valuation). AI investment has been crazy enough that I would actually have believed it, though!
I don't think you did fix it: you say "becoming a $300M company", but it's actually a $6.6B company, for which we'd be looking at valuation, not amount raised.
I asked Claude to build a system that involved parsing some dates and addresses, and rather than using a library it wrote hundreds of lines of regexes and term lists ('st', 'street', 'dr', 'drive', 'ave', etc.) to match every test case I gave it. Lesson learned.
Now, "non-technical people" should not ever by themselves put anything on the Internet that handles things like names and passwords.
It's bad that some folks want to make money on such people doing it anyway, which means they're not very nice and should get help to correct their ways.
My experience is watching a colleague use Lovable, which will mostly ignore security. Sure, if you prompt it, the system will do something that seems correct, but it will also happily undo that as well.
E.g., I was trying to help her set up a webhook listener, and it undid our efforts.
These tools seem incapable of building software in the hands of users who don't understand security already.
> These tools seem incapable of building software in the hands of users who don't understand security already.
These tools are for augmenting skills, not for the wholesale "imma programmer now" that a lot of people seem to think they are. And to be honest, lots of companies are selling that "experience" too, even though they know it isn't true, which is a bit shit.
It's definitely pushed as not needing an engineer.
My colleague now understands why unit tests matter, after watching subsequent development regularly break previous work. Lovable doesn't support them. And I don't want to touch this codebase, because I don't want to own it.
Supabase doesn’t make a public users table by default. The user schema is in auth and secured. The problem is that unskilled developers bypass those controls out of convenience and put data into the public schema without RLS. Even the Supabase docs warn against this.
The point is: why do they even have to make a new users table? Something is driving them in this direction, and as a counterexample you have PocketBase, where you don't have to.
To store application-specific data about users. The Supabase docs and examples show this. Where else would you put such data?
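For illustration, a common shape for this (the table and column names are placeholders, not from the thread): a profiles table keyed to auth.users, locked down with RLS so each user can only read their own row.

    create table public.profiles (
      id uuid primary key references auth.users (id) on delete cascade,
      display_name text
    );

    alter table public.profiles enable row level security;

    create policy "Users can view their own profile"
      on public.profiles
      for select
      to authenticated
      using ((select auth.uid()) = id);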
But what the docs don't cover is the provided Users table. Missing documentation is why I gave up on Supabase; and the Users table was one of the first problems I encountered. I could find no details on what to expect in each column at any given time.
Upon creating a new user, values get set in this table for no apparent reason. So if your application depends on knowing the verification status of a new user (for example), good luck... Supabase claimed every user was verified upon creation.
The auth schema is intentionally not exposed to the REST API for security reasons. You need to use an auth hook to put data where you need it, or an RPC with appropriate privileges, and of course RLS on any tables.
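As a rough sketch of that idea (the names here are hypothetical, and Supabase's docs describe a very similar trigger-based pattern), you can copy the fields you need from auth.users into a table you control at signup time:

    -- Security definer so the function can write to public.profiles regardless of the caller.
    create or replace function public.handle_new_user()
    returns trigger
    language plpgsql
    security definer
    set search_path = ''
    as $$
    begin
      insert into public.profiles (id, display_name)
      values (new.id, new.raw_user_meta_data ->> 'display_name');
      return new;
    end;
    $$;

    -- Fire once per new row in auth.users.
    create trigger on_auth_user_created
      after insert on auth.users
      for each row execute function public.handle_new_user();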
After seeing the responses, I believe that this is more evidence of the fact that Supabase is easy to work with (and thus attracts people who have NO IDEA what they’re doing), and less an issue with Supabase security.
It’s even worse than No Idea What You Are Doing. One can, as has been alluded to in other comments, be a completely naive rube who is using Supabase under the hood with v0 or Lovable and not have any idea that they’re even using it, or that it exists at all.
The problem is that people just really do not comprehend what the "public" schema means in Supabase. My guess is that they think it means "default" or something along those lines. If you read the Supabase documentation, you can clearly see that it says "your database's auto-generated Data API exposes the public schema by default", but to truly understand that, you need to understand what the Data API is and how it relies on RLS. People first coming to Supabase are probably either new devs, or they think of the DB as a backend service that has application-layer authentication in front of it.
Interesting. That would have surprised me if I were a Supabase user. I’m used to tossing everything into the public Postgres schema simply because it’s the default schema, and for many small apps, that’s all you need. Supabase should really rethink publicly exposing the default schema without explicit consent from the developer.
That is why in https://github.com/Qbix/Streams the default for all streams is PRIVATE, and people can choose what to open up explicitly. We support access templates, mutable access, inheritance, roles, even participant roles and custom permissions. But the default is private, and all of that is machinery on top of it.
Supabase is great if the goal is insecure, incredibly slow Postgres. Self-hosting it is also painful, with ~10 separate containers, while Supabase's own offering has downtimes that won't appear on their status page.
The only thing it actually makes easier is auth. Other stuff just becomes harder to maintain. A simple Spring Boot Java app, especially with basic boilerplate implemented with LLM help, will last a long time, be cheap and simple to host, and be easily extensible.
It can go wrong. I had a horrible experience with StackGres. I read a lot of positive things about CloudNativePG though.
I can see where people with startups are coming from in not wanting to manage database plumbing so they can focus on real business tasks. I think that's fine as long as there is a path to self-host after some growth. I might do some event-sourcing myself, so that databases are effectively materialized views that are easy to add and remove.
We're constantly striving to improve the user experience and the quality of StackGres. Would you mind sharing some feedback as to what made your experience not good with it?
Did you join the Slack Community (https://slack.stackgres.io/) to ask if you were facing some trouble? It always helps, even if it is just by sharing your troubles.
(If you'd like to share feedback and do so privately, please DM on the Slack Community)
I did try Slack. Maybe the problem is it was launched much too early. A certificate expiry issue caught me out because there wasn't an automatic process on this version to roll them over. Ironically, a single database instance would have been much, much more stable. I upgraded, but this didn't bring up the database, and restoring through the portal failed, so I had to create a new PG cluster to get my site up. I never ended up recovering the data, as the process was very tedious, involving PVCs rather than just pointing to my bucket.
The ratio of open to closed issues on the repo is much worse than CNPG's, so I would simply start there.
Thank you for your feedback. I'm trying to extract possible improvement actions from your comment, and here are my thoughts.
That certificate expiry issue was unfortunate, but was resolved (if I'm not mistaken) a couple of years ago.
StackGres is just a control plane; your database is as stable as a standalone one. StackGres itself may fail, and it won't affect your database, since it's not on the data plane. Indeed, it has a feature to "pause" it if you need to perform some manual operations (otherwise everything is automated).
There are procedures to reconstruct a database from a PVC. It's arguably tedious, but it should be much simpler than running a Postgres pod without the help of an operator like StackGres.
As for the ratio of issues: most of the issues that we get are feature and/or extension requests, and we certainly can't tackle them all. Most, if not all, outstanding issues are addressed within a reasonable time frame. Is there any particular open issue that bothers you? I'd be happy to personally review it. Also, there are as of today more than 2K closed issues; I wouldn't call that a small number.
I'd also weigh the importance of issues, like the split brain that CNPG suffers [1] and that apparently won't even be solved. StackGres relies instead on the trusted and reputable Patroni, which is known NOT to risk split brains that could lead to severe data loss.
I think people with startups just don't care. I had an interview with a startup the other day, and the interviewer said they were considering using v0 for their front-end. I really want to be wrong here, but so far it feels like all those startups are there to just take VC money for themselves and die.
Not really. Maybe for consumer products. But there are many kinds of B2B infrastructure businesses that I could build and launch myself where I wouldn't want to expose myself to the risk of day-long outages (whether for reputational reasons or because having no HA story is a competitive disadvantage), such as anything to do with payment gateways, API gateways, AI proxies, or other AI infrastructure - anything where client services would experience critical outages if your service goes down. Lots of these businesses are started without VC investment or big money from day 1.
Luckily now with solutions like Yugabyte, we can achieve enterprise-grade HA without high cost or high maintenance complexity.
You should not run a payment gateway on an inexperienced team. Start with something with lesser risk and then introduce the team to things like load balancers, keepalived, clustering and so on over time.
An hour of downtime is a lot once HA is something worth investing in. The first thing you need to do when there's an incident is tell your customers what you're doing about it, and the second thing they want to know is whether it will happen again. Since I don't know how Yugabyte works, I'm not sure about the degree of lock-in, but preferably you should have an incident process where, at around minute ten of downtime, you bring up load balancing at another infra provider with a customer-facing message, update DNS records, and then start to rebuild the system there in parallel with the main incident response.
Firebase seems to suffer a similar problem of people not setting permissions right. The only major difference is that they seem to steer devs pretty aggressively toward Google auth, which won't leak password hashes.
While in theory your API can be the database, it seems like a footgun for the inexperienced and for AI.
One thing I find about these "all in one" platforms is that they tend to lure people into a sense of "wow this is easy to use" such that they forget to check security, assuming it's covered.
This is one reason why Firebase was such a gold-mine for security researchers: everyone just forgot about security when they forgot about their backend.
Are you saying that because you fundamentally just don’t believe the DB is a good place for auth, or because these low-code frameworks tend to roll it in, and as such you see a lot of low-quality implementations of auth from these systems, simply because using them is within reach of someone who has no idea what they are doing?
To me it’s important to make this distinction. One take says that auth in the DB itself is a problem. The other take says “auth in the DB is a symptom of low-code garbage”.
FWIW, Firebase Auth and the Firebase DB are two separate things, and you can use them completely separately. However, "Firebase" is a PaaS, so I see how it gets confusing.
Fair callout, but if I am a Firebase customer, as I have been in the past (though less frequently now), I treat them as a singular entity. In other words, there’s no situation in which I would use Firebase and not use its auth, because the reason I might use Firebase is Because Of the auth, not In Spite Of it. There’s no world for me where Firebase is the preferred option that doesn’t use its auth; integration like that is literally the only reason I would ever consider ClosedSourceOwnedByGoogle over the alternatives.
I like to separate concerns. Unix philosophy and all that. That was the primary concern on my mind when writing my comment above.
I think the feature is there not necessarily because it’s the best technical idea but instead because of its ability to pull in less educated developers. That makes sense financially because there are fewer people out there with a higher degree of expertise. But from my perspective it shows that it’s not meant for me.
> I'm not going to blame the vibe-coding wave entirely.
As one of vibe-coding's most fervent critics, I don't blame it at all. Amateur devs have been doing this for a decade and change with Firebase and other hosted datastores.
I got one of my first small jobs as a contractor because of an Android app doing this back in 2012!
I find that Supabase is pretty good at warning you about these things in their project-specific security advisories, but obviously you need to actually pay attention to them and then take action.