a few months ago one of our customers migrated away from supabase and wrote a blog post about it. That blog post appeared here[0] on hacker news. many of the issues they encountered were related to local development. we made several promises to improve based on that feedback and the various comments in the HN thread
today’s launch delivers on many of those promises. We’ve added better support for database migrations, seeding, backups, debugging, and documentation.
we have a lot of work ahead, this is just the first step. our next major step forward is “branching”, which we’re rolling out today for development partners and alpha testers.
we’ve coupled the branching functionality to GitHub for now. whenever you create a new PR we launch a new instance, run the database migrations in your version control, and seed the database for reproducible test environments. we’re using Firecracker[1] for every preview environment. This environment automatically pauses when it’s not in use. we’re seeing some very impressive startup times, even though we’re stuffing a lot of services inside the VM. We looked at making full-production clones but decided against that for now until we have a robust strategy for anonymizing production data and mocking out calls to external services. Ultimately we want to offer both options, it’s just easier and safer to start with seed data.
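roughly speaking, each preview environment runs the same flow you can reproduce locally with the CLI today:

    # what a preview environment does, reproduced locally:
    supabase start      # spin up the local stack
    supabase db reset   # apply supabase/migrations/*.sql, then run supabase/seed.sql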
since supabase offers a few services beyond the Postgres database, we still have a few questions to work through with our alpha testers. for example, we also store images/videos/files for our customers. Do these need to be anonymized in preview environments? we don’t have all the answers yet, but we’re moving in the right direction. As hard as it was to have a customer migrate away so publicly, I’m proud of the work the team has done to improve based on that feedback.
I checked a little bit ago but I might have missed it so please correct me.
Are there plans to expand self-hosting support? The migrations are a big step forward.
Are you intending to fill the role of being a framework akin to a "super django" type of deal? Again the migrations help a ton, but I've been hesitant to use Supabase for random projects because I don't want to rely on the platform, and I don't want random people on GitHub who want to try it or contribute to need a Supabase account.
I'd love to use it more as a modular ORM for miscellaneous projects instead of the current "hosted platform", currently none of the tutorials (or github projects) seem to explain this route at all.
I think you actually do work for this purpose, and the docs mostly cover the bare minimum for self-hosting, and I understand your business kinda relies on the hosted platform, but I'd love to see further tutorials and thorough explanations of all the features - currently some, like the AI features, aren't really explained in terms of whether they work self-hosted or not, or if you have to do anything special for that.
EDIT: Also, I do appreciate your business being open source, and contributing to Postgres so much!
Sorry for the rambling, and I apologize if I'm blind and missed some obvious docs.
> Are you intending to fill the role of being a framework akin to a "super django" type of deal?
We don't plan to replace any specific framework. We simply want to make Postgres easier to use. You can use Django (or any other framework) and Supabase together. We provide some additional tooling on top, but we aim to make this tooling 100% compatible with other tools. As an example, here[1] is a change we made recently so that our Storage service works better with Clerk (a popular Auth service, which is a good alternative to our own Auth service). We plan to document this better - it was one of the promises I made in the OP.
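As a small sketch of what that compatibility means in practice - any Postgres client or framework can connect to the database directly (the connection string values here are placeholders):

    // a minimal sketch: any framework/ORM can talk to the database directly
    // over the Postgres wire protocol (password/project-ref are placeholders)
    import { Client } from "pg";

    const client = new Client({
      connectionString:
        "postgresql://postgres:<password>@db.<project-ref>.supabase.co:5432/postgres",
    });
    await client.connect();
    const { rows } = await client.query("select now()");
    console.log(rows[0]);
    await client.end();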
> Are there plans to expand self-hosting support?
we made a few updates[0] this week to improve self-hosting based on common feedback. if there is anything missing just let me know and I will do another round of improvements.
> currently some, like the AI features, aren't really explained in terms of whether they work self-hosted or not, or if you have to do anything special for that.
nothing is feature-gated so everything works on self-hosted. That said, I agree that we can be better at explaining self-hosting. We are putting a lot more effort into improving our docs in general. For self-hosting we can certainly be clearer about the boundaries where you are responsible (e.g. you need to take care of your own backups, and the AI features will require external accounts). we're working on this, but feel free to open issues wherever it's unclear so we can address anything specific.
Thank you so much for taking the initiative to learn and grow from that experience. My team is in the fledgling stages of using Supabase to rebuild a mission-critical application for our company. We were initially disappointed with what we found in terms of CI/CD and local development but we liked Supabase enough for other reasons to keep moving forward with it. I'm really glad we did because I think this has the potential to be a major boon for the productivity and efficiency of building and maintaining our project over time. I get the impression that you're making really good calls as it relates to your roadmap, and I'm really looking forward to exploring these workflows and sharing what I find with my team. Thanks again!
> I get the impression that you're making really good calls as it relates to your roadmap
we receive feedback from a lot of channels so it's often hard to figure out what to build. In the early days it was about reaching feature-parity with other tools. now we have a bit more breathing-room to focus on "day 2" problems. I think our team is excited about this phase since it means we get an opportunity to build something new/innovative
After being burned by other DaaS providers (looking at you, AWS Amplify/Cognito), I've been skeptical of using anything other than rolling my own solutions. Supabase's updates do continue to impress, though, and I really appreciate these improvements to local development.
I would love to hear what solutions you're choosing.
one of our goals is to provide only the tools/tech/features that you'd choose yourself or need to build to get started. If you're skeptical based on our technology choices then it's useful to receive that feedback
Unrelated to DBs, I've been thinking about trying to roll my own system for magic links recently. Even though Supabase has some of the lowest costs for MAUs, they're still too high if you're only using magic links, especially considering the related email rate limits [1]. I don't even know if I'm reading that right. Is it 4 auth related emails per hour by default?
I can run a Cloudflare Worker for $0.0000005 vs $0.00325 for a Supabase MAU. Assuming it would normally take 2 Worker runs to generate and auth a magic link ($0.000001 total), a user that signs up and never comes back would cost me 3250x more if I use Supabase.
Not all users are equal and, for low value users that probably never convert to paid users, I don't need to give them a full-blown user account with MFA, etc. Magic-link-based auth is adequate for what I need and I don't want to pay between 300,000% (for Supabase) and 15,000,000% (for Auth0) markup above the raw compute costs for someone that signs up and never comes back. For a user that converts to a paying customer, I don't really care about the cost as long as I don't have to eat it for every free user I have.
I know there are other costs, and that the requirements for magic links are more complicated than at first glance, but those costs are relatively fixed in the context of magic links, right? If the only major ongoing cost is for email, where I'm basically expected to bring my own provider, the MAU cost for a user that only uses magic links feels like a bad deal.
This isn't just a Supabase issue either. The entire auth industry is similar. I need the simplest part of the existing solution, but I'm forced to pay, in both cost and complexity, for the complicated, expensive part of the solution that I don't need or want to use. Does that make sense?
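To be concrete about how little I actually need - a rough sketch of the whole flow on a Worker (assuming a KV namespace bound as TOKENS and some sendEmail helper of my own; session issuance omitted):

    // sketch of a roll-your-own magic-link flow on a Cloudflare Worker
    // assumes a KV binding named TOKENS and a sendEmail() helper (not shown);
    // types come from @cloudflare/workers-types
    export default {
      async fetch(req: Request, env: { TOKENS: KVNamespace }): Promise<Response> {
        const url = new URL(req.url);

        if (url.pathname === "/request") {
          const email = url.searchParams.get("email") ?? "";
          const token = crypto.randomUUID();
          await env.TOKENS.put(token, email, { expirationTtl: 900 }); // 15 min
          // await sendEmail(email, `https://app.example.com/verify?token=${token}`);
          return new Response("check your email");
        }

        if (url.pathname === "/verify") {
          const token = url.searchParams.get("token") ?? "";
          const email = await env.TOKENS.get(token);
          if (!email) return new Response("invalid or expired link", { status: 401 });
          await env.TOKENS.delete(token); // single use
          // issue your own session cookie/JWT here
          return new Response(`logged in as ${email}`);
        }

        return new Response("not found", { status: 404 });
      },
    };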
> Is it 4 auth related emails per hour by default?
It's unlimited emails per hour, as long as you BYO SMTP provider. The default email service is only for testing, and not recommended for production. I usually recommend AWS SES or Resend[0] for unlimited emails.
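for self-hosted setups, the SMTP settings are just GoTrue env vars, something like this (values here are placeholders):

    # point Auth (GoTrue) at your own SMTP provider - example values
    GOTRUE_SMTP_HOST=email-smtp.us-east-1.amazonaws.com   # e.g. AWS SES
    GOTRUE_SMTP_PORT=587
    GOTRUE_SMTP_USER=<smtp-username>
    GOTRUE_SMTP_PASS=<smtp-password>
    GOTRUE_SMTP_ADMIN_EMAIL=noreply@yourdomain.com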
> This isn't just a Supabase issue either. The entire auth industry is similar.
Agreed - the industry prices on MAU, which isn't a great heuristic. for social websites, 1M users might be a low number. For B2B SaaS even 1,000 MAU could be high. For Supabase, we simply try to be fair and transparent (and we're an order of magnitude cheaper than other Auth providers). There are a lot of other things that you're _not_ paying for which we have to price in - regular security audits, zero-day support, etc.
> There are a lot of other things that you're _not_ paying for which we have to price in - regular security audits, zero-day support, etc.
Yeah. I was hesitant to toss examples of actual costs in there because I knew it wasn't really a fair comparison, but I wanted to try to make my point even though I don't have the ability to calculate the real costs.
If I had to sum it up in a way that translates into a good strategy for building mutually beneficial relationships, I'd say "don't profit off my losses". I want a partner-like relationship, but the only thing anyone is currently offering is for me to be a customer. I have to make all the predictions on conversion rates, etc., so I'm taking all the risk while the platform owner (i.e. you) makes a (high-margin) profit off every user I have, regardless of whether or not I'm generating revenue from that user.
> for social websites, 1M users might be a low number. For B2B SaaS even 1,000 MAU could be high
I would say it makes sense to bucket users into categories and split the feature set accordingly. I think that's what Firebase does [1A]. I think basic login types (magic links, password, social) are free and you only pay for users that need their identity platform.
However, there are a few problems with Firebase IMO. First, I don't like "free". It means my costs aren't realistic and I need to assess the risk of that offering disappearing. IMHO that's just another layer of complexity and risk and I'd rather pay fair value from the start. The second problem with Firebase is that it's going to take a decade of culture change at Google for me to trust any of their products, especially something that's "free".
Back to Supabase, what do you do if you want to add a significant feature that makes the current auth pricing unsustainable? Do you increase the price a tiny bit? What if I have a million free users and don't need that feature for them?
> and we're an order of magnitude cheaper than other Auth providers
This is a little unfair on my part because I don't know the true costs, but if that means you're only charging me 100x the underlying costs vs everyone else charging 1000x, that doesn't make it good value for me, does it?
I'd rather categorize my users and pay accordingly. I know this may not be realistic in terms of creating too many SKUs, but just to make the point (from my perspective)...
1. Free users get magic links. Pricing should be tied to real costs and be commodity like. Minimal cost (to me) is important. Long term, stable, predictable pricing is important. This comes out of my pocket, so I don't want you having a large margin on it and I don't want it fluctuating because small changes can have a large impact on me if I have a lot of free users.
2. Convertible users get passwords, social logins, TOTP, security keys, etc. Basically they get anything that doesn't have external costs (to you). I'd be willing to subsidize these a bit, but not anything crazy.
3. Paying users get SMS, etc. Basically they get things that have external costs (to you). I'd pay a large premium for these users and I'd be willing to take on all the external costs in addition to that premium (e.g. I pay for all SMS costs).
4. B2B users get any B2B features and my (inexperienced) opinion is they fall into a category where the cost (to me) doesn't matter much.
The other thing that I don't like about having a uniform cost per user is that I know the cost per user isn't uniform and, if my costs aren't a function of your costs, that means you're taking on some risk in the prices you've set. What if your overall prices are too low even though my specific usage is already profitable? Do I have to endure a price increase?
Again, this is uninformed because I don't have a decent knowledge of the true costs, but for my use case (magic links only, bring your own email) the prices feel 100x too expensive, but for a B2B use case they feel 100x too cheap.
I'm sure it's difficult to accommodate all use cases in a way that makes everyone happy, so hopefully my perspective is useful feedback.
I just want to share that this is a great write-up. I'm in the middle of Launch Week, so I don't have a lot of head-space to digest it all, but I promise I'll come back to this and give it the time it deserves next week.
really impressive how you responded to that criticism - listening to feedback and then building out a full product suite. looking forward to trying out everything here, particularly branching, because just yesterday i was looking thru the docs recommending staging/dev/prod instances and wondering if there was a better way!
In the original posting from Val Town I didn't quite catch why, in lieu of local dev, they didn't use dedicated dev DBs (possibly per developer) in the Supabase service.
Anyone have perspectives on pros and cons of local dev vs cloud dev environments with Supabase?
I don't think we have any positions open currently unfortunately, although we're always looking for talented SREs & support to keep up with the platform growth. feel free to reach out (my details are in my profile)
Cool to see dev-ex improvements around local Postgres testing. At Graphite we use pg-mem for fast unit tests, but it's not ideal. It's extremely fast, but certain advanced queries aren't supported. Curious about what others do for unit testing Postgres operations?
Agreed, better Postgres local testing would be amazing!
I've heard about testcontainers [1] before, which can be used for Postgres. I've used it a bit, but the Elixir library for it is still under development [2] so I haven't been able to try it at work. Elixir's Ecto library is pretty good at wrapping Postgres for tests though [3].
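For anyone curious, the Node flavour is simple enough - a rough sketch with @testcontainers/postgresql (needs Docker running locally):

    // sketch: throwaway Postgres per test run via testcontainers (needs Docker)
    import { PostgreSqlContainer } from "@testcontainers/postgresql";
    import { Client } from "pg";

    const container = await new PostgreSqlContainer().start();
    const client = new Client({ connectionString: container.getConnectionUri() });
    await client.connect();

    await client.query("create table items (id serial primary key, name text)");
    await client.query("insert into items (name) values ($1)", ["widget"]);
    const { rows } = await client.query("select count(*)::int as n from items");
    console.assert(rows[0].n === 1); // real Postgres, so advanced queries just work

    await client.end();
    await container.stop();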
Huge congrats for shipping some of these features. I know many of us have been wanting them for a while. In particular, migration squashing will be super helpful. I'm a little iffy on the branching stuff, since I like the solution of having two separate Supabase projects to act as different environments (though that's certainly not as powerful as actual branching, but it's simpler to reason about). Really excited to try some of the things on this list that seem new today (several of them have been out for a while as far as I can tell?)
> I like the solution of having two separate Supabase projects to act as different environments
This won't be going away. Branching will just be another option
fwiw, I've also heard from a few enterprise companies that the git-based branching model isn't as suitable for them, because every other tool in their stack works in a prod/stage/dev type model, and there is no simple way to make it work with (for example) ~30 different environments
The branching model is really an "all-in" solution. It works particularly well if you're using something like Vercel/Netlify for your frontend. If you're using a serverside framework (Django, Rails, Phoenix, etc) then it's not as simple. That said, I think it's the way the world is moving, and with the advent of cheap VMs it becomes very plausible even for these serverside frameworks.
Something I'd like to see with local development and the Supabase CLI is better control over the timing of inserting seed data, handling triggers, and default data. I ran into a bunch of issues getting a nice local dev setup. For example, seeding data after migrations is not helpful (and will fail) if your latest migration is destructive - you want to seed data and then run the next migration.
For context, my local dev process is now as follows:
1. supabase db reset with seed.sql empty
2. run a preseed script that disables any triggers and removes default data that has been previously seeded in migrations
3. seed data
4. reenable triggers
5. execute any working migration files that I keep in a separate folder
I've written a script that handles all this (sketch below), so I've mostly solved it for myself - but only after running into a bunch of challenges getting my local env to work well. Very open to general comments on the approach too - perhaps there is a simpler way.
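A rough version of the wrapper script (preseed.sql, seed.sql, and the pending/ folder are my own conventions, not CLI ones; the DB URL is the CLI's local default):

    #!/usr/bin/env bash
    # rough wrapper for the five steps above
    set -euo pipefail
    DB_URL="postgresql://postgres:postgres@localhost:54322/postgres"  # local CLI default

    supabase db reset                 # 1. apply migrations (supabase/seed.sql kept empty)
    psql "$DB_URL" <<'SQL'            # 2-4. one session: triggers off, seed, triggers back on
    SET session_replication_role = replica;  -- skip user triggers while seeding
    \i preseed.sql                           -- strip default data seeded by migrations
    \i seed.sql                              -- load the real seed data
    SET session_replication_role = DEFAULT;
    SQL
    for f in pending/*.sql; do        # 5. working migrations kept outside supabase/migrations
      psql "$DB_URL" -f "$f"
    done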
> if your latest migration is destructive - you want to seed data and then run the next migration.
We have added a supabase migration up [0] command that runs only pending migrations (i.e. those that don't exist in the local db's migration history table). You can use that to test a destructive migration locally with data from seed.sql.
After testing, you want to update your seed.sql with a data-only dump [1] from your local db. That would make CI happy with both the new migration and the new seed file.
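Concretely, the loop looks something like this (check supabase db dump --help for the exact flags on your CLI version):

    # db already reset + seeded; apply only the new destructive migration
    supabase migration up
    # then refresh the seed file from the migrated local db (data-only dump)
    supabase db dump --data-only -f supabase/seed.sql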
> 2. run a preseed script that disables any triggers and removes default data that has been previously seeded in migrations
It sounds like the default data is no longer relevant for your local development. If so, I would suggest running supabase migration squash [2] to remove the default data.
To disable triggers before seeding data, you can add the following line to seed.sql [3]
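    SET session_replication_role = replica;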
I like being able to call supabase db dump (data only) and not touch the file at all - I get that adding SET session_replication_role = replica; is one line, but my preference is still to avoid it. But like I said, I already disable triggers ahead of the seed script running.
I currently use supabase db reset quite frequently as I make changes in development. Using supabase migration up would mean moving the latest migration out of the migrations folder, running supabase db reset, moving the file back in and then calling supabase migration up. Which is not the worst idea, I'd still be looking to automate those steps with my own script atm tho.
Re: squash, I've been a little cautious to use it since I first noticed it in the CLI docs, as I wasn't really sure what the actual outcome would look like. If I have something like this in a migration script:
    -- set initial permissions
    INSERT INTO rbac.permissions(name)
    SELECT unnest(enum_range(NULL::rbac.permission_name))
    EXCEPT
    SELECT name
    FROM rbac.permissions;

...what happens to rows inserted like that after a squash?
> Using supabase migration up would mean moving the latest migration out of the migrations folder, running supabase db reset, moving the file back in and then calling supabase migration up.
We can definitely do a better job here. I'm adding support for a db reset --version flag [0]. This should allow you to run migration up without moving files between directories.
> I wasn't really sure what the actual outcome would look like. If I have something like this in a migration script
Agree that we can do a better job with the documentation for squash command. I will add more examples.
The current implementation does a schema only dump from the local database, created by running local migration files. Any insert statements will be excluded from the dump. I believe this is not the correct behaviour so I've filed a bug [1] to fix in the next stable release.
For the Observability tools for Postgres category, are there plans to bring some of this information into things like trace data via OpenTelemetry? Would love to capture this info continuously instead of needing to dig in with a CLI tool when monitoring notes something is awry.
yes, otel across all of Supabase is on our radar for sure. we just added ingest support for otel payloads to Logflare (docs coming soon), so once we have that you'll get them on the platform and locally.
if you haven't seen it, we do have a metrics endpoint you can scrape for all your Supabase metrics, and we just improved the example repo quite a bit on how to ship those somewhere: https://github.com/supabase/grafana-agent-fly-example/
I hate to be this guy, really. I would like to adopt Supabase in my company, but I cannot yet.
I commented on a HN post almost a year ago about how hard it is to do custom Auth with Supabase. I still haven't found a good solution for it. For example, LDAP Auth is quite crucial in most enterprise settings, yet I have no idea how to do it with Supabase. I can find a workaround for PostgREST by putting a secondary API written in some other language in front and fiddling with reverse proxies. But how to do it with Supabase, such that all the other services (realtime, ...) work nicely? Is it so hard to provide a function that accepts a custom strategy given the HTTP request data?
I created an issue[0] almost a year ago on Supabase, which was transferred to Gotrue. I even provided some code examples from Laravel. Even if it is not specifically for LDAP, make some API available to do so, please.
> Even if it is not specifically for LDAP, make some API available to do so, please.
Based on the issue (title and comment), it seems that you have asked specifically for LDAP support rather than a generic API.
Feel free to share some more details in the github issue, so the Auth team can figure out how best to support you. It looks like they have followed up asking for the use-case and they are just waiting for some clarifying details.
that's on the roadmap - definitely one of our most requested Auth features. I don't have a timeline yet, but I know it's somewhere at the top of the (kanban) list for the Auth team
these newly announced features are all inside the CLI, so i'm not sure how else to show an example project. After installing the CLI, you can run `supabase start` and it will pick up all the Migrations[0]. The CLI starts a local dashboard to see the logs[1]. After pushing to production you can run a backup[2], etc.
from your edit above, perhaps you were just looking for the docs but let us know if there is anything else you need
they should be compatible, so I don't think you need to see it as an either/or. If you choose to use both, however, then you should use drizzle to scaffold your database.
Our clients will work with the drizzle-generated tables (since they use PostgREST).
I love the innovation coming out of Supabase and want to use it, but I feel like the team ought to focus on polishing what is already there before investing so much in R&D.
Bring the platform SDKs to feature parity, make RLS easier to use, improve auth (e.g. add anonymous users and native login), polish file uploads, etc.
Supabase has a ton of potential, so hopefully this is taken as constructive feedback!
> focus on polishing what is already there before investing so much in R&D
fwiw, this post is exactly that. Everything in this release is an improvement to existing functionality within the CLI, in response to similar feedback we received here[0].
There are a lot of "behind the scenes" improvements which don't get visibility on HN - only new features tend to get upvoted so I think we have a reputation which isn't representative of our day-to-day focus.
That said, we know there are a lot of shortcomings remaining (as you point out, and the comments below). Please do continue to share details on your experience, in the github issues preferably, so that we can focus on the most important tasks first.
I agree with all of these points. I really enjoy using Supabase, but it does feel like 95% of the effort is on releasing new features.
The mentioned migrations, for example, don't work if your database is not very simple (e.g. triggers typically won't work because the dependency order is wrong, custom types are not supported, etc.), and this has been the case for some time.
The documentation for the most basic functionality is also quite poor and requires digging through the TS source for detail. For example, here's the JS lib auth signInWithPassword function:
    Log in an existing user with an email and password or phone and password.
    Requires either an email and password or a phone number and password.
    Parameters:
      credentials (required) SignInWithPasswordCredentials [no link to what this is]
That's all. There is no explanation of what data/error might return, error conditions, whether it can throw, etc. Looking at the source, there are a variety of additional parameters (user metadata, captchaToken) that are not mentioned at all. The site has various articles, howtos, videos etc. that explain different bits of functionality, but the core reference is incomplete and it's a pain to dig through blog posts to discover basic functionality.
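After source-diving, actual usage ends up looking like this (URL/key are placeholders; note it returns an error object rather than throwing):

    // what the reference docs don't spell out (supabase-js v2)
    import { createClient } from "@supabase/supabase-js";

    const supabase = createClient("https://<project-ref>.supabase.co", "<anon-key>");

    const { data, error } = await supabase.auth.signInWithPassword({
      email: "user@example.com",
      password: "hunter2",
      // options such as captchaToken exist in the source but not in the reference
    });
    if (error) {
      console.error(error.message); // auth failures come back as an AuthError, not a throw
    } else {
      console.log(data.user, data.session);
    }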
To be clear, I think it's a great product, and the open source aspect and great communication from the team is a big plus, but I do think more time could be spent getting the basic product right before chasing 100s of new features.
100% agree... I've been using Supabase for a new project for a few months and, while overall I'm impressed, there are just too many bugs and partially rolled-out features. Here's my current gripe list:
- features/ui missing from local development
- more secure triggering of edge functions from database (currently have to hardcode key in SQL - see the sketch below)
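For context, the workaround today looks roughly like this (a sketch using pg_net; project ref and key are placeholders):

    -- sketch of the current pattern: the service role key has to sit inline
    -- in the trigger function, so it ends up committed in migration files
    create or replace function public.notify_edge_function()
    returns trigger
    language plpgsql
    as $$
    begin
      perform net.http_post(
        url := 'https://<project-ref>.supabase.co/functions/v1/on-insert',
        body := to_jsonb(new),
        headers := jsonb_build_object(
          'Content-Type', 'application/json',
          'Authorization', 'Bearer <service-role-key>'  -- hardcoded secret
        )
      );
      return new;
    end;
    $$;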
Hey @cjonas, I'm a developer on the Supabase Edge Functions team. We do have plans to improve the current database trigger behavior. Will share more updates on this in the coming months.
Can you explain what you mean by template URLs? Do you mean route params like `/v1/functions/users/:id`? If so, you can use a framework like Oak[1] to handle them. Edge Functions will make the full path, including the querystring, available to the router.
Source maps: are they broken during local dev or when you deploy the function? Also, by broken do you mean that in a stack trace the file/line numbers aren't accurate?
> We do have plans to improve the current database trigger behavior. Will share more updates on this in the coming months.
Excited to hear more! Being able to trigger functions from DB & cron triggers without having to hardcode a secret (which causes them to end up in migrations files) will be a huge improvement.
> Can you explain what do you mean by template URLs?
Oh, I had no idea that the functions routes were "wild card"! I don't think that's mentioned anywhere in the documentation, btw.
> Source maps, is it broken during local dev or when you deploy the function?
"Source maps" (does deno actually use source maps?) ARE broken as well. EG: The line numbers in runtime don't line up with the function code in the IDE or even that is in the docker volume.
[0] https://news.ycombinator.com/item?id=36006018
[1] Firecracker: https://firecracker-microvm.github.io/