This is a webserver for Deno. It is MIT licensed, written in Rust, and based on the latest Deno Runtime (1.32+). It can serve TypeScript, JavaScript, and WASM functions.
This one is important for local development and self-hosting. For local development, it ensures there is parity between development and production. For self-hosting, it means you can deploy and manage your Deno Functions on your own hardware. We have provided a Demo repository for deploying to Fly.io [1]. The runtime is not production-ready, but it is in a usable state. Note that we still use Deno Deploy at Supabase, and strongly recommend it for your own Deno functions.
In the medium-term, this will reduce our tech-stack by 1 service. We'll remove Kong (a reverse proxy) and replace it with the Deno runtime. I'm excited about some other possibilities that this provides for self-hosting - one that we talked about was the ability to bundle a SQLite file with your functions, for a fully-contained and globally deployed webserver.
I've been dabbling with deno for a couple years now, one of the most powerful features I'm excited about is BroadcastChannel[1] support, which works with --unstable in Deno Deploy.
One of the challenges I've run into is debugging timeouts in deno deploy, so I'm curious--
1. is BroadcastChannel supported in supabase self-hosted edge functions?
2. is there more tooling available for debugging supabase edge functions than currently exists in deno deploy?
---
if curious of specifics, this is my work in progress proof of concept, exploring using pathnames for channels, to get anyone on the same url path connected across regions through sockets and broadcast channel: https://github.com/tylerchilds/kickstart/blob/869506c9dae1e1...
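In rough outline, the idea looks something like this (a minimal sketch of the pattern, not the linked code - the relay logic and channel naming are illustrative):

    import { serve } from "https://deno.land/std@0.177.0/http/server.ts";

    serve((req: Request) => {
      const { pathname } = new URL(req.url);
      const { socket, response } = Deno.upgradeWebSocket(req);
      // One BroadcastChannel per URL path; on Deno Deploy this links every
      // instance of the service across regions (locally it needs --unstable).
      const channel = new BroadcastChannel(pathname);
      channel.onmessage = (e) => socket.send(e.data); // relay from other instances
      socket.onmessage = (e) => channel.postMessage(e.data); // fan out to the rest
      socket.onclose = () => channel.close();
      return response;
    });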
[Supabase engineer & author of the blog post here.]
Thanks! Good questions.
1. We haven't enabled BroadcastChannel in the Edge Runtime, mainly because we haven't found good use cases for it within Edge Functions (it does make sense as a way to subscribe within a browser client, but I'm not sure how it fits with the async/short-lived nature of Edge Functions). Curious how you plan to use BroadcastChannel in Edge Functions?
2. As I've mentioned in the blog post, Edge Runtime allows you to tweak the duration and memory available for an Edge Function (check examples/main/index.ts in the repo too). Personally, I use this option to debug timeouts and memory issues on Edge Functions. We also plan to introduce better ways to profile your Edge Functions in the future.
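For reference, spawning a user worker with explicit limits looks roughly like this (modeled on examples/main/index.ts; exact option names may differ between versions):

    import { serve } from "https://deno.land/std@0.177.0/http/server.ts";

    serve(async (req: Request) => {
      // EdgeRuntime is injected by the main worker runtime; bumping these
      // limits is how I reproduce and debug timeout/memory errors locally.
      const worker = await EdgeRuntime.userWorkers.create({
        servicePath: "./examples/hello",
        memoryLimitMb: 150,
        workerTimeoutMs: 60 * 1000,
      });
      return await worker.fetch(req);
    });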
I don't think my use-case is necessarily a good fit for edge functions. I am trying to achieve what Supabase realtime/multiplayer accomplishes, but generically. I participate informally with the https://braid.org IETF working group, which to over-simplify is CRDTs over HTTP with subscriptions.
I'm interested in web standards and that's what's drawn me to deno, so I'm super excited the more and more I see it being adopted. BroadcastChannel piques my interest because it is perfectly in that gray area of standardization-- it makes total sense on the client and we're on the cusp of discovering what that could look like for servers.
In deno deploy, all the instances of my service are able to be linked together by BroadcastChannel, which I'm viewing as a p2p-style architecture. Ultimately, I'm curious about how this works under the hood and if it would be possible to interoperate a BroadcastChannel between Deno Deploy, Supabase, and say a Raspberry Pi in my house.
I've gone on a bit of a tangent, but I think maybe I should get involved with the WinterCG, since now that I'm putting my thoughts to words-- seems like it fits their charter.
Yep, that sounds interesting. Feel free to share with us if you write a formal proposal. Would love to explore ways to provide better worker-to-worker communication.
Congrats on the launch! I’ve been so underwhelmed with Lambda’s lack of forward progress lately and have been wanting to play around a bit with V8 isolates. This looks like a super-easy way to get a runtime spun up and handling requests on AWS.
The largest advantage currently (in my opinion) is with respect to cold starts. Lambda has gotten significantly better over the years, but cold starts are still a concern, largely because it’s difficult to anticipate the level of provisioned concurrency needed to mitigate them. Not having to think about that at all would be a large advantage (I’m ignoring Lambda SnapStart since it’s strictly limited to Java at this time).
It’s also the case that there’s just so much historical baggage within Node, mainly with regard to CJS. Developing against Deno at least presents the opportunity for us to move past that and improve DX.
These would boot faster since you don’t have to boot a whole OS, just a V8 VM. The downside is compatibility: your lambdas would have to be largely rewritten (especially if they rely on any other AWS services). It’s also probably more expensive, since you’d have to keep an EC2 instance or something up running this.
Kinda wish Amazon would have an alternative runtime that was more like this. They kinda do with lambda@edge, but that’s only for CloudFront stuff, mostly.
Lambda@Edge is a great solution for the locality problem (CloudFront can just route to the nearest edge function, and despite certain headaches, it works well), but what it doesn’t help to solve is the cold start problem. In a sense Lambda@Edge works against it, as provisioned concurrency is only supported for ‘regular’ Lambda functions.
At a certain level of scale, the cost of spinning up an ALB + Fargate task (which could run these servers relatively easily) ends up being relatively negligible. Although that’s certainly not true for every org.
One of the biggest annoyances with Deno deploy/functions is that there is no way to store any data. This would be very useful to e.g. cache an auth token, store a key/value pair, etc. See also: https://github.com/denoland/deploy_feedback/issues/110
Is any work being done to fix this? Or is this out of scope currently?
Yes, this is something we are exploring for Edge Runtime. One cool development is that AsyncLocalStorage now works on Deno via a polyfill [0]. We may enable a single node store on Edge Runtime using this. There are also some interesting developments on the Supabase platform to enable Edge Databases, which could be another answer to this.
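To sketch what AsyncLocalStorage gives you - request-scoped context rather than a cross-request cache - here's a minimal example assuming the node:async_hooks compat route (the polyfill in [0] may wire up differently):

    import { serve } from "https://deno.land/std@0.177.0/http/server.ts";
    import { AsyncLocalStorage } from "node:async_hooks";

    const requestContext = new AsyncLocalStorage<{ authToken?: string }>();

    function handler(): Response {
      // Anything on this request's async path can read the stored value.
      return new Response(requestContext.getStore()?.authToken ?? "no token");
    }

    serve((req: Request) =>
      requestContext.run(
        { authToken: req.headers.get("authorization") ?? undefined },
        handler,
      )
    );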
Is Supabase targeted towards webdev "apps" or mobile "apps"? As a mobile dev I got excited to check out a Firebase + serverless alternative, but the docs seem very targeted towards webdevs and non-native apps.
I see there's a 1.0 Swift library that's available, would love if someone can chime in with their experience using that in an iOS app.
I'm working on several RN + Supabase projects and it just works!
There were a few kinks initially, like the URL polyfill being missing from RN and the need for AsyncStorage alongside supabase.js, but they've all been rectified by the team or made clearer by the Expo instructions in the docs — although maybe it'd be clearer for people if the Expo guide were labelled as being for RN? People who haven't started using RN might not know the connection.
It's such a pleasant stack, we're experimenting currently with edge functions in place of an API in certain places, so far it's just to use Cloudflare Turnstile to insert contact form records into a database, but it's super trivial.
Hey!
I wonder why it's the CEO who comes here in person, and not the CTO or some other principal engineer. Is it because the company is doing so well that the CEO has this much free time for that sort of activity, or is HN considered such a serious resource that it's worth the time of even a C-level person?
We use Kong as our API Gateway, so it is responsible for authentication, rate limiting, routing, etc. By moving that to the edge runtime, we simplify our stack, since we already run an edge runtime container to run user functions.
The other motivation for doing this is that it makes our API Gateway programmable with user defined functions. For example, you could implement custom transformations for storage or augment the Authorization header with custom claims before calling Postgrest.
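To make that concrete, the programmable-gateway idea could look something like this (a sketch only; the upstream host and the claim-minting helper are hypothetical):

    import { serve } from "https://deno.land/std@0.177.0/http/server.ts";

    // Hypothetical helper: re-issue the user's token with extra claims.
    // A real implementation would verify and re-sign a proper JWT.
    async function mintTokenWithCustomClaims(req: Request): Promise<string> {
      const claims = { role: "authenticated", tenant: "acme" }; // illustrative
      return btoa(JSON.stringify(claims));
    }

    serve(async (req: Request) => {
      const headers = new Headers(req.headers);
      headers.set("Authorization", `Bearer ${await mintTokenWithCustomClaims(req)}`);
      const upstream = new URL(req.url);
      upstream.host = "postgrest.internal:3000"; // hypothetical upstream
      return await fetch(upstream, { method: req.method, headers, body: req.body });
    });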
This is really exciting Inian. I’ve long felt that api gateways are one of the heavier and more complex parts of the modern infra stack and the work you’re doing to make this more self hostable is laudable!
Thank you for open-sourcing especially under MIT license. I am digging through the codebase and finding a lot of interesting things.
I am building another open-source project that is also a self-hosted Deno runtime written in Rust, Windmill [1], which lets you build all of your internal tools and infra (endpoints, workflows, apps) from scripts (Deno, but also Python, Go, Bash). Instead of running one HTTP server continuously for your function, we run it on demand, which has its own set of challenges.
We are doing something pretty naive right now: we create a fork and call deno run [2]. It's decently efficient (25ms for a naive script e2e). We are familiar with deno_core and use it in other places to run JavaScript directly, but for TypeScript, Deno didn't expose the root entrypoint directly as a lib, so we had to fork it [3]; we're now gonna be able to do the transpiling to JS AOT and skip the fork, for sub-5ms script execution.
We also want to make some functions togglable as high-performance endpoints, and for those we would want them spawned as HTTP servers to save on cold starts. I'm gonna investigate the codebase thoroughly - thank you very much for sharing it.
This is a webserver built on the Deno runtime. Deno makes it easy to build a custom JavaScript runtime [1]. I am excited about extending this runtime to integrate it better with the rest of the Supabase stack. For example, we can modify the Deno filesystem API to read and write files to Supabase Storage instead.
just use a regular server for mid/larger sized apps.
I started my web dev journey with JAMStack, Vercel, the "edge". Everything is easy as long as you only deploy a full-stack NextJS app. The moment other apps come in, just use a server deployed as a VPS and avoid "edge runtime hell". Edge runtime hell refers to "you can't do this (function with over 2MB payload), you can't do that (because not supported by X)".
EDIT: I implicitly meant with "deploy as VPS" -> deploy your server via Render/Heroku/CI-CD instead of serverless functions running on the "edge".
I'm coming around to building everything for a VPS from the outset. There's a lot of upside to VPSes, such as:
1. Can be purchased as a fixed cost, usually at a rate that's much cheaper than on-demand pricing, and especially serverless--this tends to only get better with time as competition keeps prices low
2. It's "just" a Unix/Windows/Mac box, so the issues with runtime constraints you mention are bounded differently (and often more favorably); serverless is also just a box, but the constraints tend to be more onerous and limiting and it's not usually accessible in the same way
3. With containers, it's trivial to move between providers, so the hardware itself becomes fungible
4. On containers, I'm having a great time shipping Docker Compose configs--this works really well for the scale of application I'm targeting while avoiding the dreaded complexity of e.g. k8s
5. There's decades of high-quality tooling already built and battle-tested, which makes operating VPSes much easier; the fact that you can SSH into the machine, for instance, has huge leverage for a solo person working on independent products
Going forward, I'm planning to skip edge compute altogether unless there's a really compelling reason to want it. I should also mention that when a VPS is paired with a CDN, you can layer on bits of "edge compute" where it's warranted; or, you know, use it to cache static assets close to your users. :)
All-in-all it's kind of a funny return to where I started ~20 years or so ago with web development.
> it's kind of a funny return to where I started ~20 years or so ago with web development
It's a journey I've been going on too.
All the new platforms and paradigms that have emerged have had an initial shallow aura of helpfulness that has drawn me in, but almost exclusively, when digging in, I've realised that they create more issues than they solve, and/or introduce limitations that aren't worth it and force me to write janky, overly complicated workaround code that isn't comprehensible even a few weeks hence.
Maybe the most notable exception to this is containers. But in a sense they're an abstraction over the very same VPS paradigm. So that makes sense. It's not something new or different, just the same thing but with some advantages (and disadvantages too, obviously).
If VPSes had the same marketing, and maybe a bit more polished standard tooling, I have a feeling they would quickly gain traction and simplify life for most people. As well as prevent vendor lock-in, of course.
It’s in the SaaS business model. The incentives, even for fully open source like Supabase, are misaligned with self-hosting. Even if they’re super honest and trying to be helpful, their fully managed, globally available offering is going to have very different needs than self-hosters.
I actually prefer the model of FOSS where the company behind it offers consulting instead - building, deploying, and operating the product for customers who lack the in-house expertise. It’s not perfect, but it helps align the incentives towards simplicity.
It's also just generally wrong to build a scheduler on top of the docker API. We have CRI for a reason, because everyone knows Docker is not going to be around forever. Certainly not the company. Maybe dockerd.
> usually at a rate that's much cheaper than on-demand pricing
This is an area where there is legitimate swindling/inflation by hosted app providers (e.g. DO Apps, Heroku, Render, Fly, etc). Oftentimes the per-vCPU/memory price is inflated over the underlying cost, even relative to a rather expensive underlying provider like AWS, which they'll reasonably justify by saying that this is the value-add: yeah, you pay more, but it's more managed. But when you have underlying access to the VPS, you can host more than one process! Which, of course, they're oftentimes doing on their end to cut costs.
Serverless functions can legitimately fall into the "always cheaper" category. If you've got twenty apps that each get request volume in the range of dozens-to-thousands per month, you could host that on a $5/mo VPS, or you could pay a few cents for a FaaS (Lambda, GCP, Cloudflare Workers, etc, all priced in the same magnitude). But the price-to-scale chart of serverless functions is really weird; they do hit a point where they're more expensive than just, you know, running a web server on a VPS. That point isn't at a super low volume, but it's not far off from where it's something to think about for a typical software organization. If I had a personal project that hit that point, I'd classify it as a good problem to have.
I also feel endless frustration in how there legit isn't a single cloud provider out there that (1) offers a serverless functions runtime, and (2) gives you the ability to set a non-zero budget which turns off the world when you go over budget. Many offer free tiers with no credit card, and some are even generous (Vercel and Firebase are two good examples), but I won't build on a free tier. I want to pay you. So, you upgrade and give a credit card, and now you're liable for one fuck-up bankrupting you, or throwing you on your knees at the mercy of their customer support. The severity of this fuck-up ranges from "my GCP account does just use a VPS, but egress is unlimited, so the bill was a bit high this month" to "the new dev changed a lambda function to call another which called the first one, and our bill is now the GDP of a developing nation-state".
The vast majority of the managed infrastructure world is, unfortunately, B2B swindlers just trying to out-swindle each other, only possible because they're all buying from each other, constantly raising prices, finding new ways to bill their customers, and losing any grip on the true (extremely low) reality of their costs. Supabase is better than most. I really do appreciate releases like this one. I'd also add Cloudflare to my list of "good ones"; they've taken a hard stance against charging for bandwidth, and I think that single decision has controlled a ton of the incremental costs we see from their newer higher-level product offerings like Workers.
I have stuck with cheap VPS servers for as long as I can remember. It takes 5 minutes to deploy a full stack node.js app, along with a database - and I've yet to exhaust the resources on my VPS, even with all my side projects (production grade and hobby stuff).
Have always found it weird how so many heroku-style hosting providers charge _per app_; things get costly, quickly, when you have lots of small apps like I do.
Just yesterday I realised I'll need a database to store job queues for https://chatbling.net - ChatGPT helped me figure out how to install it, have it persist to disk, ensure it starts up with the system etc. It's nice to not be fearful of unexpected charges hitting my card.
To anyone reading, even if it's just for learning, every now and then, skip the vercel/fly.io/netlify/planetscale/upstash/render/railway whatever cool hosting provider is out, and give a $5 VPS a try.
I think I want to do the same. Can you describe your stack please? How much downtime do you get and how do you deal with app updates and system updates to the vps machine itself? What about monitoring?
I’ve been using Cloudflare workers for a while now and I have to disagree.
There are entire classes of problems I no longer worry about with Workers and can just focus on building. My search history is a reflection of that. I’m no longer looking up “how do I put this thing in a Jails or container to limit exposure?” “how do I properly secure an SSH server?” “what is the magic incantation in my NGinx configuration to get an A+ SSL rating?”
I also spend less time thinking about ongoing maintenance, automating rotation of SSL certs, keeping system packages up to date, doing a dist-upgrade every few years, maintaining Terraform files to rebootstrap a server from scratch, thinking about hot/cold redundancy, etc.
With (some) serverless providers an absolutely massive slice of the responsibility of building a web application is pushed across the API and vendor boundary and is someone else’s responsibility.
For me at least, this is huge. I have a handful of clients that don’t have any full time engineering staff, and being able to push the cost of ongoing maintenance down is the only thing that allows them to afford building a custom application.
I'm afraid you're trying to make system administration look much harder than it actually is. As an example, adding good defaults to your nginx config is automated by certbot, or you could use caddy. You could run your apps statelessly by containerizing your applications or by simply writing Ansible playbooks and then not have to worry about upgrades - you simply deploy the application on the new server and spin down the old one.
As someone who has been doing this for some years now, it is not simple.
To be fair, you can write a simple Ansible playbook if Ansible isn't doing much for you. But if you're using Ansible to manage things which are themselves not simple (like "just install a Node runtime please") you are at the mercy of whatever shell script Ansible eventually ends up calling.
I've been through Ubuntu version updates and Ansible really didn't help with issues like "that package is now no longer in the PPA you got it from".
Administration doesn't take a lot of my time, but when I do need to do it, it can take a solid day of focus to make sure I know what I'm doing, make the changes, recover from the pitfalls I always forget (e.g. "this Ansible step fails in --check mode because it depends on a file that is created in an earlier step, which doesn't happen in check mode") then work through the inevitable issues. I wouldn't want to do it without Ansible, but it's not "simply" anything.
If you stick to shipping containers it gets rid of 99% of these problems at the cost of some extra storage for the N copies of runtimes. Then your base infrastructure is reduced to “something that runs containers” which can be anything from vanilla docker, to docker-compose, to one of the many diy-PaaS platforms, to a full blown k8s cluster.
I've maintained large systems and small systems. FreeBSD and Linux systems. I helped build and maintain the serverless platform that hosts the Netflix API. I managed build systems and CI/CD for docker images deployed to k8s and built a package manager to address the instability that is inherent with rebuilding artifacts the way tools like Docker do (which are already a marked improvement over Ansible).
Ansible is essentially logging in and running a series of shell scripts. This works great in isolation, but do it long enough and you'll realize a lot of things you thought were idempotent, atomic, and infallible are not. Most package managers are glorified tarballs with shell scripts wired up to lifecycle hooks during install. You YOLO unpack them into a global namespace and hope for the best. With any luck, when something surprising happens, you can just rerun your script to bring the server back into a good state. But often times the server just ends up borked and you have to throw it away and start over.
K8S somewhat addresses this by maintaining the desired state in a declarative format and comparing the actual state against the declared state in an eval loop. But K8S is absolutely massive and unbelievably complicated. Most declarative systems are non-trivial. The closest I've seen our industry get to this ideal is Nix.
Linux itself is a beast, a reliable beast, but it's a chunk of software I don't think you can just wave your hand at and say "this is easy!" It's easy because it works. When it doesn't work, it's absolutely not trivial.
And this is the core of it: everything you just listed off that makes server administration easy has no delegation of responsibility. They are abstractions that you ultimately own. When they stop working, that's your problem. The Ansible project has no vested interest in the health of your server or the success of your CI/CD pipeline. They have no engineers standing by to help you bring your site back up. That's all 100% you even if you've pushed it down under the covers.
Compare that to my serverless deployments. I pay a vendor to be responsible for everything I possibly can, and everything I end up being responsible for I keep as minimal as possible. These deployments aren't mine, they are my customers'. My customers are small to medium sized businesses (for my fortune 500 contracts, I build the systems you're talking about and a whole lot more). A small to medium sized business can not maintain Ansible. They are mechanics, plumbers, drywallers, etc. They are not Linux System Administrators. And I'm not here to milk them for money, I want to get in, get done, and leave them with a stable system that requires minimal maintenance. I do that by having vendors lined up that are responsible for the system running below my software and those vendor's support contracts are a lot cheaper than my weekly rate.
I updated my post. By "deploy as VPS" I meant "deploy as a (virtual private) server with a service like Render or Heroku".
I agree on the sysadmin stuff - no thanks. The appeal of "serverless edge", at least to me, is ease of deployment. But that ease of deployment comes from tooling / git integration rather than the underlying architecture.
The benefit of "your functions are globally distributed" has yet to appeal to me. For large ecommerce, maybe. But then again, just deploy your servers in a distributed fashion, which is "easy" if no persistent state is involved, and tooling like fly.io is emerging.
to be honest I think much of this is also solved by thinking of VPS as "docker compose on a VPS" (or whatever container tech you want to use to abstract away the bare-metal sysadmin stuff).
I honestly think with containers there is much less need for stuff like render and heroku, even more so if you use SQLite and remove the hosted database complexity.
In fact in many ways it's even better - for instance the ability to run locally with an exact bit-for-bit production environment. Can't really do that with a PaaS.
Is there something about Deno that makes it a good base for edge functions? Netlify also implements its Edge Functions in Deno https://docs.netlify.com/edge-functions/overview/ Maybe it's just a coincidence, but I've seen two Deno-based edge function implementations in the last hour, so it makes me wonder.
One reason is that Deno can run on v8 isolates which start up really quickly. This makes the deploys near instantaneous and helps with cold start time too. The DX with Deno is pretty neat since it comes in bundled with a test runner, doc generator, linter, bundler, etc.
Added a comment in the discussion [1] with a video and a quickstart guide on how you can configure VSCode to work with Deno. Let's continue the discussion there in case those steps don't work for you.
With respect to supporting other runtimes, it mostly comes down to focussing on providing a great experience with one runtime first. And based on your feedback, we still have some work to do there.
This looks like you're complaining that your development environment is not configured to work with Deno and is instead configured to process typescript files for a completely different runtime? Deno's tooling is top notch. The fact that most developers' systems are all setup for Node or some Node-based compilation step is hardly the fault of Deno.
Editors in general are still figuring out how to have and manage configuration that is flexible enough for different language servers, linters, formatters, test runners, debuggers, etc. It's not really fair to blame either Deno or even your editor, though your editor is clearly more "at fault" in this scenario.
Does anyone have good practical experience running Supabase locally? It's against their business model to have this run well, so I'm a bit scared of the growing lock-in.
(P.S. if Supabase is listening: some customers like myself would be willing to pay a significant amount for support of a Kubernetes/Docker deployment running on our servers. We currently pay $50k - $100k simply for enterprise support contracts for each of Cloudera, Kafka, Kong, Flink... Not your target customer group, so maybe it's silly to think out loud here.)
Yes! `supabase init` and `supabase start` gets most of the way there! Customizing the ports in `supabase/config.toml` makes having multiple supabase instances running easy. This works great for all the Postgres, auth, and storage things, as well as real-time functionality. My engineers have a single package.json script to get everything up and running.
I work at an agency, and the dev teams run supabase locally. We use Prisma for migrations and maintaining our SQL functions and storage configurations. `supabase stop --backup` is our friend.
Not being able to run multiple edge functions at a time kept us leaning into Cloud Run and Cloudflare. I'm excited about today's announcement.
I have run self-hosted Supabase for a while in Kubernetes. One of the main gotchas I will warn against is becoming too attached to the CLI, as it’s currently quite tied to assumptions that you’re using hosted Supabase.
Also be prepared to do your own infra work if you want something scalable and production ready. The HA guarantees are kind of “bring your own” eg not really there out of the box and IMO the default docker compose setup provided, while thorough and working, serves as more of a proof of concept of the system than something you should actually be running in production.
Self-hosting is easier said than done. You can do it, but maintaining the infra, scaling, performance, security, updates, downtime - all of that is a _giant_ overhead. It's not against our business model for self-hosting to work well. Everything we do is open-source and we don't "supplement" it with features from closed-source code.
Hey there - Tim here from Supabase!
We have some options to explore here so probably best to connect on a call to get a better understanding of your project and how we can best align to help!
Can you send me an email on tim@supabase.io and we can tee up a time?
Cheers,
Tim
If I need to do more than one query to the database in series, my intuition is that it would be faster to make those calls in the same region as the database, rather than at the edge. That seems to hold in Vercel's playground [1] (tested against Supabase).
Any guidance for Supabase based apps? Is it possible to run my functions close to the database?
For when it really matters, you should just use a Postgres function. That will be an order of magnitude faster than anything you can do to match regions.
Supabase has auto-generated APIs (using PostgREST), so you can execute a Postgres function like this (a minimal sketch using supabase-js; the function and argument names are illustrative):
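    import { createClient } from "https://esm.sh/@supabase/supabase-js@2";

    const supabase = createClient(
      Deno.env.get("SUPABASE_URL")!,
      Deno.env.get("SUPABASE_ANON_KEY")!,
    );

    // "get_order_totals" is a hypothetical Postgres function; every query it
    // runs stays inside the database, so there's only one edge<->db round trip.
    const { data, error } = await supabase.rpc("get_order_totals", {
      customer_id: 42,
    });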
How is the real world WASM story for typical server side languages like python, ruby, etc. these days? Last I looked at it WASM seemed to have a lot of warts and complications (and limited real world language support) vs. just spinning up a native process in a container. Can you go from for example python flask API source file to serving requests in WASM with minimal fuss or loss of functionality vs. native CPython?
This is the third big release this week (along with a Postgres Pooler and a Logging Server).
If you're wondering why there are so many, we generally "build for three months" then do all of our big product announcements in a "Launch Week". We're 2 days into this Launch Week so there will be a few more releases to come.
I think this is great, but a growing question in my mind from these is: what is the long-tail plan for maintenance of this constellation of stuff? Especially because you guys are building on technologies that you can’t really just casually find developers for, e.g. Elixir and Haskell. I keep seeing missing critical features in Realtime and Postgrest, and I know these are armies of 1-2, but it does kind of worry me when I think about investing time and effort into the Supabase ecosystem as a whole, seeing more and more launches while there isn’t much action on existing issues.
> while there isn’t much action on existing issues.
That one is planned, but it's not high priority because it can already be solved with SQL functions [1][2] in a flexible and secure way (GROUP BY is an expensive operation; it cannot be exposed to the frontend just like that).
I do get that some conveniences on the JS libraries are missing but they're not blockers(or critical features IMHO).
Thanks for the response Steve. In general, I unfortunately don’t agree that this is a “convenience” feature or that this alternative you brought up is “flexible”.
This way of doing things is just so much more painful in terms of ergonomics that we elected to instead escape-hatch into raw SQL with a different library (Sequelize, in our case). Black-boxing some of our business logic into views or RPC is just not something the devs I work with are comfortable with. Having business logic live in the db like this just unlocks a whole new world of pain for people who aren’t used to it. Now I have a whole new class of problems too: now I have to have unit tests in the db with pgTAP, and now I as a dev have to decide when and where this kind of logic should go in the db vs in the application layer, and how to test it. In general I would really call this not a viable alternative at all, unless there are some comprehensive design documents that say “do this stuff as RPC in the db because it makes sense to do it there, do this stuff in the application layer”. When I tried selling this ‘workaround’ in its current form to other devs in my org, they all balked at it.
Maybe I’m thinking about how Postgrest is supposed to work fundamentally wrong and all of this stuff is generally supposed to live in the db. But there certainly isn’t any guidance directing that in documentation currently. And in general I think if you want to advocate for a design where you split business logic between your application layer and your db there should be a plethora of examples where this has been done successfully, especially because that idea goes very much against how a typical developer thinks about system design.
It’s weird because I can do everything else in a “typical CRUD app” the nice way with Postgrest except this. It feels like the part of the codebases where I have to deal with this (and indeed, it comes up every time) is a constant wart in an otherwise pretty good setup.
Please don’t take this as me hating on Postgrest. I think it’s absolutely fabulous software and I am in genuine awe of what you have achieved. I can only aspire to have your work ethic and vision when it comes to this kind of stuff. I just want to help you see how myself and the devs I work with use it in real life and how they react to Postgrest in the context of using it when trying to make maintained business applications over years of time.
Usually the teams know what they need to build based on user feedback (which we get a lot of).
We don't have Product Managers, the developers are expected to be very product-focused and user-facing. After every launch week we ask them what they want to build and then they get to work.
Hi! Congratulations on the launch. I'm a big fan of what you folks are doing over there and I genuinely love your product.
Would love to keep in touch to see when the changes to this will happen. Also, speaking as a CEO & founder, my natural inclination is to keep that culture going, stay small, and avoid hiring a PM as long as possible. As I reflect on my operating time, I wonder if it was a mistake not to find the right Product Manager (sic) early so that they can be inoculated with the culture as we grow. My estimation is that the ramp time for this would be at least 6 months (a year?) for that future product manager to operate with the same founding ethos.
Lmk if you're open to chatting about it. I'm honestly super curious.
> wonder if it was a mistake not to find the right Product Manager (sic) early so that they can be inoculated with the culture as we grow.
That's an interesting thought. We do have some techies "acting" as PMs across the team, but I can see how we might want a dedicated person at some point.
always open to chatting to other founders - feel free to reach out.
Interesting. I've never worked in a completely tech-led company before, but it seems like a good approach. My philosophy on it has changed a little: I used to think PM should be embedded in technical teams writing tickets, à la Scrum Product Owners, but now I think things function better when the team writes its tickets, and the PM works at a more general level: features, direction, marketing, legal, regulatory, etc etc.
we hire very slowly: when we have a problem to solve and nobody is owning the problem in the current team, we write a job spec specifically for this role and hire someone.
At the moment we don't have any Rust jobs, but you can always keep an eye on supabase.com/careers or follow us on Twitter to stay updated.
I’ve been using Deno Deploy as a paying customer for the last year, both because I want to support what they’re doing and because I appreciate the amazingly tight deployment times.
That said, it’s suffered for the lack of access to the file system or dynamic imports which makes things like MDX challenging without adding additional build steps.
Do Supabase Edge Functions allow read/write file system access? What about dynamic imports?
Hi Mark, hosted Supabase Edge Functions would still run on Deno Deploy, so those limitations would still exist. However, we plan to introduce file system access via integration with Supabase Storage. This is still at the rough-idea stage; maybe we'll have a solid answer in a couple of months :)
For dynamic imports, we haven't looked into it since Supabase users haven't requested it. If you can open an issue on Edge Runtime repo [0] and explain how you intend to use them, we can probably work on a solution.
Request pre-processing using JavaScript is amazing! Will I be able to deploy an Astro app on supabase edge functions? I'm building a multi-tenant B2B2C product on supabase that's deployed on a customer's sub-domain, so writing reverse proxy logic in js would be a boon.
We currently block HTML responses on Functions and Storage, but we are considering relaxing that limitation, so that you can host static sites on our Storage. And allowing user-defined functions running on every request to Storage in the edge runtime would get you the reverse proxy logic you are looking for.
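The user-defined proxy logic being asked about might look roughly like this (a sketch; the host parsing and upstream mapping are illustrative):

    import { serve } from "https://deno.land/std@0.177.0/http/server.ts";

    serve(async (req: Request) => {
      // e.g. "acme.customer-domain.com" -> tenant "acme" (illustrative mapping)
      const tenant = (req.headers.get("host") ?? "").split(".")[0];
      const upstream = new URL(req.url);
      upstream.hostname = `${tenant}.internal.example.com`; // hypothetical origin
      return await fetch(upstream, {
        method: req.method,
        headers: req.headers,
        body: req.body,
      });
    });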
just wondering, what are the main places where deploying on supabase is better than deno-deploy? what's the main integration story with the rest of supabase?
It's as bad as Deno Deploy has always been (used by Netlify edge functions, Supabase, and others).
To me CF Workers are more about interacting with the CF CDN, doing lightweight HTTP stuff, etc. I would never use Workers to build a full server application. Their custom runtime locks you in, it's extremely barebones, and obviously almost no NPM modules are compatible.
They are working on Node compat which will probably help but I feel CF hasn't been investing enough into their cloud products (Workers, Pages, etc). DX is still hit and miss and progress overall feels glacial.
My superficial impression is they've been investing more in their networking services for enterprise customers but don't take my word for it. I don't pay much attention to their announcement weeks.
Any thoughts on implementing really "enterprise" languages like Java or C# so you could attract "the big boys" (corporations)?
I know the corporations I work at are very anti-node.js and very pro-C#, and everything needs to be approved by a tech committee. I don't think we could get Supabase approved (and instead we fork over millions a month to Azure).
Just a bit of feedback. I'm not a fan of C# or Java at all these days, given how good Rust and node.js/TypeScript/Deno have become. I just now work for 40,000+ person companies where I know this would be a non-starter unless you were super lucky to get all approvals aligned and be on one of the "cool" teams.
Maybe not the pivot/user feedback that aligns with your goals. :D
We won't add support for other languages in our Edge Functions (beyond WASM support), but we are adding native support for other languages in our client libraries. For example, we already have C# support: https://supabase.com/docs/reference/csharp/introduction
The idea is that you'd just use the client libraries in your favourite framework.
We are planning to focus on the Deno and WASM ecosystem. If C# compiles to WASM, we would be able to run it, but we have a lot of work to do to make this seamless.
How about WASM functions? One of Deno's selling points is sandboxing but WASM takes it to a new level it seems. I've been looking into Spin and also Dapr with WasmEdge. There is even QuickJS which does JavaScript.
Would love to know what you're interested in doing.. we make Extism[0] which allows you to more or less ignore the lack of a "standard library" for something like V8 wasm environments.. Spin definitely makes it easier to compile high-level source code & have done great work on their SDKs to provide these "standard lib" elements. maybe Extism is useful, but if not please feel free to drop in our Discord[1] and chat w/ the team.
We are also exploring ways to create functions in WASM-supported languages and run them directly on Edge Runtime. We expect this will get easier once WasmGC [0] ships and Deno's WASI support improves.
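In the meantime, running a module compiled to WASM already works with the standard WebAssembly API; a minimal sketch ("add.wasm" and its "add" export are illustrative):

    const bytes = await Deno.readFile("./add.wasm");
    const { instance } = await WebAssembly.instantiate(bytes);
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // 5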
What does deno give you over node? I am assuming by first impression there are some standard libraries in these functions which aren't available otherwise in node?
What stands out for me about Deno over Node is its overall architecture.
1. It has a built-in permissions model [0] which gives the end user control over what resources a script can access within a system (see the sketch after this list).
2. As mentioned in the blog post, the Deno runtime is extremely modular, so you can customize it per use case. This helps with security, performance, and overall developer experience.
3. Deno has first-class support for Web Platform APIs, which reduces the need for reliance on third-party modules for simple tasks. A good example here is the native fetch module shipped with Deno. Unlike in Node, you don't end up with multiple modules implementing fetch.
4. Built-in tooling - Deno CLI has built-in tooling for formatting, linting, doc generation, benchmarking, etc. This is great for developer ergonomics, especially for people from Go and Rust backgrounds.
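As a quick illustration of point 1 (a hypothetical main.ts):

    // Run with: deno run --allow-net=example.com main.ts
    // This fetch is permitted; any other network, file, or env access
    // would be denied at runtime unless explicitly granted.
    const res = await fetch("https://example.com/");
    console.log(res.status);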
Supabase Auth is very similar to Firebase Auth. Both are easy to setup with just a couple lines of code. The advantage, imo, is that Supabase just stores your user data in a regular Postgres database so you can then set up relations with other tables.
I used Firebase for a long time, and Supabase more recently. I have no desire to go back to Firebase. No longer having to obsess over read/write count (like you do with Firestore), and being able to leverage full SQL when needed, is a game changer.
Hi, Supabase Auth Engineer here. We are very close to feature parity with Firebase Auth. Off the top of my head, the two features we're missing right now are anonymous authentication and multi-tenancy.
We're also gonna be launching something auth-related this week too so stay tuned!
I like the direction of working towards eliminating Kong here!
One of the reasons I chose Directus over Supabase (they can also be used together, it's not either/or) is that I'm one of the rare engineers out there that doesn't like containers.
Well, to be more specific: I don't like containers for languages that don't need them. For example, a node monorepo doesn't really need a container, it's just a npm install away from working pretty much anywhere. Or a go static binary. Containers are great for languages/stacks that have more difficult dependencies.
That being said, Supabase just had too many moving parts for me. I'm allergic to complexity. I want my stack to be as basic as possible, I want to innovate in the business problem domain space, not the tech stack space.
I totally get the "don't reinvent the wheel" idea, but the complexity of a Supabase installation when you look under the hood at what's going on inside the container really turned me off. With Directus, it's just a npm dependency to your node app, nothing else. It'll automatically reflect over your database schema, and progressively enhance it with additional capabilities as you choose, automatic REST and GraphQL APIs with detailed and comprehensive configurable role based row-level security. An automatic and very capable Vue based admin app, and lots of other goodies. All the code is open source, well structured, easy to debug (it runs in-process, it's all just nodejs), and to fork/enhance as needed. Doing the same with Supabase would be difficult, to say the least.
I like the direction Supabase is moving with this - consolidating back to more of a monorepo with less internal services and dependencies.
I really like the Supabase Postgrest-js client too. One of the things that troubles me about directus is the home-grown DSL for queries. It's limiting and error prone, and impossible to do anything complex (you currently can't even query json fields, though that's coming soon I think). I think Supabase did a really good job with isomorphism with that library.
Another thing I think Supabase did right is to just focus on Postgres. Directus is hamstrung by their knex backend and multi-db support, imho.
I've been thinking that if I were to build something similar from scratch, I think I'd lean heavily on sequelize and build the whole shebang around that. It already has a very mature and capable DSL for queries that can be adapted to isomorphism with some symbol mapping and injection protection. It has the capability of enforced row-level security with scopes, and you don't have to deal with the issues around Postgrest or Directus' custom DSL. The only thing it doesn't have is automatic schema reflection - it's a model-first ORM. That's a solvable problem though.
I've had an "itch" about this for a while. I may be "retiring" in a year or two, and I'm seriously considering taking all the lessons learned from both Supabase and Directus and trying to create a super-simple sequelize based monorepo option as my next thing.
The team will be around if you have any questions
[1] Deploy on Fly: https://github.com/supabase/self-hosted-edge-functions-demo