Supabase (YC S20) – An open source Firebase alternative (supabase.io)
1120 points by vira28 11 months ago | 366 comments

I am ecstatic that someone is finally taking on Firebase. As a Firebase user, I find it invaluable. Their free plan and limits are very generous. They offer not only a database but also authentication, hosting, and perhaps their biggest feature besides authentication: Firebase Functions. I have APIs in production that run solely on Firebase Functions with Firestore as the database. The CLI is another huge Firebase feature.

Following this intently, because Firebase has no true competitor, let alone an open source one. Nice work so far.

I've been using Firebase since 2016 in production and after all these years I find the service quite lacking.

Both databases are super limited and put all the burden of work on the client(s) for anything beyond very simple queries.

The functions have some of the worst cold starts I've experienced. Until very recently the dev experience was terrible but Firebase local dev was released a couple of days ago so this should be solved.

As for alternatives, AFAIK there is nothing that replicates the whole platform but there are better options for the individual parts.

- FaunaDB instead of Firestore.

- Vercel for edge static hosting + cloud functions.

- Netlify for everything except the databases


FWIW, the cold start issue (which was hitting me with regular 10-30+ second delays for even the simplest of functions) has recently been "fixed" (at least, I'm down to <1-2 second cold starts).

The specific issue I was running into: https://github.com/googleapis/google-cloud-node/issues/2942
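A common mitigation, independent of that fix, is deferring heavy SDK imports until first use so module load time isn't paid on every cold start. A minimal Python sketch of the pattern (the stdlib `json` module stands in here for a heavy client library such as a cloud SDK, and `handler` is a hypothetical function entry point):

```python
# Cache the client across warm invocations; import it lazily so the
# cost is paid on the first request, not at module load (cold start) time.
_client = None

def get_client():
    global _client
    if _client is None:
        import json  # stand-in for a heavy SDK import
        _client = json
    return _client

def handler(request):
    """Hypothetical function entry point."""
    client = get_client()
    return client.dumps({"ok": True})

print(handler(None))  # {"ok": true}
```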

Check this benchmark:


It displays data from the last 3 days of tests. Google Cloud has a max peak of 60 seconds for cold starts. It's the second worst after Azure.

Wow, that is pretty bleak actually. 500ms+ in a world where an edge cache serves at 5ms definitely means one needs to deploy these strategically.

That's an amazing benchmark. Though you have to fiddle with the concurrency parameter to see that some of the slowness is caused by scaling.

OTOH cold starts are calculated with a concurrency of only 10.

The point of using serverless is handling massive traffic spikes efficiently and cheaply but 10 concurrent connections doesn't seem very massive.

Cloudflare Workers are doing much better than the rest in this respect (see the max value, the graph is misleading) but these have serious limitations (e.g. max 50ms of CPU time). This makes them a bad fit for most situations.

In my current project, which requires the lowest possible latency, I'm having more success with Fly.io. Instead of cloud functions you create Docker images which are distributed across their regions and scale up/down based on demand in each region.


That's like Cloud Run, which I also prefer. I'll try Fly; it looks like an even better fit.

I'm still getting regular 15-20 second cold starts.

Have you checked Amplify? Haven't had the chance to really do anything with it yet, but I've heard quite a lot of good from a few people.

Firebase smells like the Google Maps model, with its "free" for a long time plan, before turning up the price once most of the market is used to the product and too deeply locked into the API.

Just this year, Google/Firebase dropped their flat monthly fee plan, which increased the Spark plan's limits. Just this week, Cloud Functions were removed from the free Spark plan, which will break a lot of people's apps.

These decisions are alienating to me. I've been working on migrating my Heroku MERN stack app to Firebase but now I'm having serious second thoughts. I was initially attracted by the free and flat-fee plans. I won't ever sign up for a service that demands unlimited liability without any way to put hard limits on expenses. Without a way to set an absolute maximum on expenses, whether $10k or $0.01, I'll never upgrade my account on Firebase.

I've spent this week reviewing some alternatives, and standing up everything myself on a VPS is more attractive than losing sleep over the possibility of my crappy code bankrupting me on Firebase.

Wow, did not know they dropped cloud functions from the free tier, but don't you get a free usage tier as long as you provide a credit card?

firebaser here

Both are indeed correct: you must now provide a credit card to use Cloud Functions, and you do get a free tier on the Blaze plan.

In fact, the Blaze plan comes with a free tier that is bigger than the limit of the free plan was, but you can no longer use it without entering a credit card.

This change came while adding support for newer node.js versions. Cloud Functions now uses Cloud Build to create its containers, and while Cloud Build does offer a free tier, you'll have to enter a credit card to use it.

Why shouldn't they charge for a great product? When Google charges it's a problem; when they don't charge it's a problem.

No, when they tout the product as free or cheap, and then only later up the pricing by 10x, once people have built their business around the older pricing model. That's a problem.

The pseudo-democratic Silicon Valley model of freemium is the problem. Seducing you into using something with the word 'free' but then trapping you with a changing API and cost structure. They should be up front with the realization that our ability to pay THEM is not commensurate with use by OUR users. Fees should be based on OUR revenue, not usage.

> Fees should be based on OUR revenue, not usage.

So if someone builds a business that makes very little money but uses a ton of compute, the cloud provider should subsidize that? And conversely if you make a ton of money using barely any compute, you should pay your cloud provider a premium? This business model makes no sense.

The cloud provider would very much like to have the premium part of the model without the subsidy. There's a reason a lot of B2B enterprise products don't list prices on their websites and ask that you contact sales instead. Value-based pricing and all.

I've seen a lot of nonsense, but the OP's model is right up there.

If you want to use a lot of resources without paying the cost of providing them, you are the definition of a bad customer. Your service provider isn't your VC, not least of all because you aren't giving them equity. Bad customers should always be fired ASAP so they don't damage the business.

Ideally a business provides a service its customers are happy to pay the costs of, and not just the costs, but also a significant profit. That leads to a virtuous circle where the business is incentivized to continue investing in the service, adding more servers, engineers and support personnel to add more valuable features for its customers.

Otherwise it's just a matter of time before they post that farewell letter and give you the phase-out date when the servers are shutting down.

That's why I always look for an open-source alternative before using a free-tier app. The free tiers are subsidized by paid ones, so eventually users will have to pay up. Will be adding Supabase to https://opensource.builders.

This is not a charity

If you've built your business so it is dependent on someone else's pricing model the problem is of your own making.

Then I think you would agree: don't trust Google to keep the prices low. Or any cloud SaaS company that has raised prices.

I trust Google the same way I trust any company I deal with, whether they are a SaaS company or my local butcher. I trust them to act in their own self-interest. Hopefully that will be in line with my own, but if not, be prepared.

> Don't trust Google. Or any company.


I think you'll have a hard time building a business without being dependent on a supplier's pricing model.

The problem is making yourself completely dependent on a single supplier.

God forbid someone wants to get money for their product

No one's saying they shouldn't. If the current business model isn't sustainable, then it's a predatory lock-in scheme to monopolize the market and then take advantage of the people who've become dependent on your product.

Couldn't agree more. If major price changes are required to make the product work well after customers have been "enticed" in with a reasonable cost, it is underhanded to say the least.

It's more than just "entice". Since Oreo, Google has crippled apps which don't use FCM by limiting their ability to run alternatives that keep a blocking socket waiting for new information. The only choice is to use FCM or lose essential functionality.

FCM is part of the core API now. They don't want other shitty solutions constantly waking the device up and eating your battery. That is why only FCM, i.e. a trusted source, is whitelisted. It's easy to peddle half-truths here; just give it an anti-Google color and carry your pitchforks.

Former Firebase and Parse employee. Back when Parse was coincidentally working on GCM (aka FCM) support to improve reliability over our own blocking-socket push network, Google threatened to shut down our customers if they used our network. I personally liked our approach, which used our network in China and Google elsewhere. I would believe Google were just concerned with Android quality, except our customers reported that they were being scraped from meetup lists, interviewed, and told that Facebook was pricing Parse low to steal data from our users.

We went quickly from "Why shouldn't they charge for a great product?" to "they deliberately toasted competing solutions by creating a white-list with only their own."

Admittedly, some other solutions were shitty and ate into the battery, but this wasn't the case for everything and forcing everyone onto a proprietary platform if they need an always connected background service is not "a great product," but a defective one.

This is literally what Apple does and HN goes gaga over it. Only first-party core services are allowed and given special privileges.

Yeah, I wouldn't touch an Apple product either.

I get your point that people expect differently from Google, but Google brought that upon themselves by marketing Android as an open source alternative to iOS. Now they've got the market share and developers, they're no longer interested in marketing it this way and are now apparently more interested in cementing the walled garden.

The battery issue is a valid concern, but I don't buy that this policy is purely to fix that problem. They took the opportunity to make sure people are passing all of their data through Google services in order to have full functionality on their devices.

And Android is open source. I'm sure you can Google the URL. The same can't be said about Apple. Do you see the iOS source anywhere?

I'd rather have good battery than random HN flame bait. 99% of users will too.

Apple is routinely bashed around here, and rightly so.

I remember some software VC lady explaining this tactic among others for OSS businesses on the software engineering daily podcast. Her world seemed so inherently competitive, yet here she was, revealing her cards. Even though burning bridges to get ahead is typical behavior, it remains destructive. Calling it out helps us all move in a better direction.

If you don't want to pay then roll your own solution or use this startup

> Why shouldn't they charge for a great product ?

They should charge from the beginning.

> When Google charges it's a problem

Where is that? It's a problem when it's abrupt.

It's easy taking the high road when you're not the one actually doing something.

As a business, if you're getting something for free, assume someone else is subsidising you, as there's no such thing as free lunch.

Not sure what point you are trying to make. What high road did I take?

> if you're getting something for free, assume someone else is subsidising you, as there's no such thing as free lunch

And how does this matter to the comment OP made, which I was replying to?

The problem is they've forced people to use the product by hindering their ability to run background services without it. Since Oreo, background services are idled unless they receive high-priority push notifications from FCM.

And this is done to tackle the horrendous battery issues.

> when they don't charge it's a problem

only if it means they'll kill the service, or "you're the product"-ize its users

I've read too many stories of people and companies being temporarily (and possibly permanently) incorrectly flagged/banned from Google (by some algorithm).

If your business depends on Google, you are taking serious risk. Even if you have a contact within Google who can champion your case, a loss of your Google services could end your business.

I wouldn't worry about this. Any examples?

I got my old Google account banned when I enabled AdSense (and did absolutely nothing with it). They blocked me from a random set of their services, so that account became unusable. And I did nothing wrong. And no way to appeal.

idk what I was thinking with my comment above! I think it was misplaced.

It's really sad that Google and other big corps do this to people. They destroy their livelihoods on a whim. We really have to think of a way to divest ourselves from them (GCloud, AWS, I'm looking at you too).

Why, so you can quarrel with each one and dissemble at every turn? Everyone knows what GP is talking about. If you don’t think it’s a risk, then enjoy!

I am not sure if I can get behind this. A company invests work and money into a product and makes it free for the community and smaller businesses to use, only to have people turn around and copy it, breaking the "you use my product for free and give me some publicity" contract. Something about this ticks me off morally.

I see from the links that this is probably borrowing a bit from PostgREST? I've just discovered PostgREST in the past few days, and along with PostGraphile it seems like all of these projects aimed at "getting back to the database" are a good idea; I like it. It always bothered me that the first order of business with any software development framework was to provide a root/admin database connection. It's like this thing just explicitly said we're supposed to ignore all of the security work that went into making that database software for the past however many decades, and turn it into a dumb storage medium instead.

The lasting impact of Firebase is it proved to the world that being opinionated about the database in order to provide good tools is a viable business model, rather than force ORMs onto people to try and appease every different database flavor under the sun.

Good luck to you guys, I like the trend.

You're right about PostgREST. See my comments here: https://news.ycombinator.com/item?id=23321132

> we're supposed to ignore all of the security work that went into making that database software for the past however many decades, and turn it into a dumb storage medium instead

I couldn't agree more

With containerization and a network orchestration layer, the DB often isn't the best place to enforce access permissions.

I think that's debatable.

A UI without a persistent root/admin connection lingering in the wild cannot leak such a root/admin connection, can it?

How many retail corporations have had breaches resulting in huge credit card dumps that would not have happened if they had not been using frameworks with persistent root/admin database connections outside of their internal office network?

I may be wrong about the history of security work on databases (feel free to correct me!) , but I think the security model was built mostly for an age where a single DB would be used by many apps, thus access control at the DB level seemed natural to the DB admins.

Today, with managed/containerized DBs, microservices, and share-nothing architecture, I've seen most apps use their own database instance, in which case the access control stuff seems to be more of an obstacle than useful. E.g. just the other day, I ran into issues because the user used for schema migration did not have access to some tables in my app's database in Postgres, since it has table-level access control.

Not every database application is a public facing web application.

Think about, for instance, a company's internal customer database (including billing, etc), accessed with internal tools. CSR/Support people need to be able to see billing status but not credit card numbers, for example. Different levels of management need to be able to generate reports, but again not get credit card numbers.

For these use cases the opposite is true. The further you take auth and permissions away from the database/devops admins, the more times you're going to need to reinvent auth and permissions boilerplate for every internal tool under the sun.

Which is a pointless exercise if none of the tools in question are ever going to be accessible outside of the company's VPN anyway.

Active Directory / LDAP are not abandoned technologies, after all.

Access control gets complicated though. Maybe a column should be accessible to a user, but only under x, y, z conditions that require hundreds of lines of logic to figure out. Do you really want all that inside the db?

It certainly does. I've been at a company that had an improper permission result in a CSR getting elevated access and deleting the whole customer database, including every single credit card number, by mistake (admittedly this was back in like 2002). Better still, the devops guy who wrote the backup scripts had recently quit, and they were broken, the backups weren't being done. The company-saving backup from the previous day wound up being on an internal tools developer's laptop. I wasn't a database admin there or responsible for the permissions, but it was quite a wakeup call for everyone to double check their stuff when the rumors of what happened got out of the meetings the next day.

The counter argument is, would it better to have 10, 20, 100 such possible situations mulling around the building every day, or just one? Maybe if there's just one, you put enough effort and people into the one to get it right. That's the pitch for AD / LDAP being used for all auth and permissions, and I think a compelling one at that.

What are the pros/cons of using entirely different (siloed) databases with uniform permissions instead of a single database supporting granular permissions?

Off the top of my head, I'd say security might be improved but ease of use may suffer. Interested in hearing other's thoughts on the matter.

The cons are innumerable. We've been down that road before, and the whole world of business intelligence and enterprise data warehousing came out of it. When every functional unit has its own DB, how do you answer questions about the business as a whole? How do you deal with data that multiple functional areas need? On top of that, managing a lot of small silos requires a lot more work and resources than the alternative.

Eventually you're going to want to bring that data all together and now you have to have data architects and engineers developing etl solutions and then your resulting data warehouse is going to be that singular database you were trying to avoid from the start, with all the overhead of building it and maintaining it on top of all the maintenance of the source silos.

See https://en.wikipedia.org/wiki/Data_warehouse

One thing to note is that the singular database used for answering business questions is often not customer facing, and thus does not have as strong reliability guarantees (In my experience). Most of the business analytics happens asynchronously (etl pipelines everywhere).

I'm not sure how to avoid this. The need to answer business question favors a single, queryable DB whereas the need to keep applications siloed and abstracted away favors multiple application data stores.

I'm curious to hear what others think about this as well. I run some WordPress sites in containers and I give each one its own DB instance. I just create the root account with a strong password, give it to WordPress and call it a day. Makes ops much simpler, although it's probably not the best use of resources (e.g. memory).

Any big downsides to doing things this way? I'm no DBA so I appreciate any insights.

It may be more cost effective to use the same DB instance, but that has its own issues, like one site dragging everyone down, and all sites needing to handle the same DB version and upgrade together.

I guess it's also just more instances to manage in total, instead of 1 + n it's 2n.

You _could_ run the dbs on the same instance but not using the same DB process, via containerisation perhaps. That way you could reduce your instance count while still keeping (somewhat) operational separation..

Scaling, durability and backup. Your single small container DB may not be able to serve high requests per second; it depends on the architecture and caching, though. If it goes down, your app goes down, with no failover or replicas. And it is difficult to back up many isolated DB instances.

I am a strong believer that the Firebase model is the way forward. I'm a huge fan of Firebase, but it is scary to have no proper alternatives. Yes, sure, you can build a similar setup with AWS or a bespoke implementation, but they are not in the same spirit.

Very glad to see people working on alternatives.

IMHO though, what makes Firebase special is the glue they use: account management and libraries come for free, so you can start tinkering on the interesting parts. Firebase is not just Firestore; it is a set of tools that work together seamlessly.

When this is achieved, your own product feels mature as if you are working on implementing a product on top of FB or other established platform and everything "boring" is handled by someone else.

In my mind, Firebase is a very flexible CMS with sane defaults.

Amazon seems to be working on bringing Amplify up to the same standard as Firebase, and it's certainly getting there, so in the near future there may be quite a few alternatives.

I wrote a master's thesis in which I compared several realtime database products. I promise you Amplify is nowhere near "getting there". It is a crippled version of Firebase and there is no fix in sight. For example, you cannot even do sort queries with DataStore, and Amplify does not work with Angular because they have had broken typings for 9 months with no fix. And the list goes on.

Neither of those issues surprise me, from what I've seen of the project.

I'm not familiar with the project historically, as I've only been using it for about 2 months, but it seems to me like there's been an increase in activity since the start of the year - like the package modularization, new UI components, and new docs (which are still pretty bad to be honest). I totally agree it's sub-par to Firebase, and if you stray anywhere off the beaten path you're completely on your own (i.e. not using React, seemingly (!!)), but I do think it has the potential to become a viable alternative.

My primary issue with the platform is the choice of a NoSQL database, which, in my view, just doesn't match the majority of application requirements - if you want to do any sort of text search, you have to spin up a whole sodding Elasticsearch domain, which are expensive as hell for a new product and take literally 20-30 minutes every time they're updated with an `amplify push` command. I also waited over a week for one to be deleted not too long ago. That's why my plan is to replace the API with a Lambda function running Postgraphile with some JWK logic to use Cognito for authN, while keeping the idM and file storage stuff.

Would love to read the thesis if you can share!

Same here. Is it maybe available from a university website?

I find the overall Amplify offering to be a mess. The hosting and authentication works pretty well but the rest of it just feels like they put duct tape in front of a bunch of existing AWS services.

I’ve found Hasura+Cognito+S3 as the closest equivalent to Firebase. It’s not as neatly wrapped up into a single product but it’s pretty close.

Yes I completely agree actually, I've been using Amplify to prototype UI quite well but my plan is to switch to using Postgraphile (or maybe Hasura, I haven't decided yet) + Cognito to replace the backend - primarily so I can use a relational database that isn't Aurora Serverless. I do feel like they've made good progress on the project in recent months, but it really needs another good year of work on it before I'd consider it cohesive enough to stick with.

This looks great; however, at first glance it doesn't mention anything about auth. Do you have any plans for that? For me this is the topic I most want to just delegate to the service.

Dashboard, realtime stuff, etc are great too. RESTful APIs I can of course get with PostgREST [0], which is insanely excellent, so the value I'm looking for is to have everything managed, from hosting/storage, to security, to all the other annoying nitty-gritty that I'm likely to get wrong.

[0]: https://postgrest.org
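For readers who haven't seen PostgREST: it exposes each table as an endpoint and encodes filters as `column=operator.value` query parameters. A rough sketch of building such a request URL in Python (the host and table are made up):

```python
from urllib.parse import urlencode

# Hypothetical PostgREST deployment exposing a `users` table.
base = "https://api.example.com/users"

# Vertical filtering via `select`, horizontal filtering via
# column=operator.value pairs, sorting via `order`.
params = urlencode({"select": "id,name", "age": "gte.18", "order": "name.asc"})
url = f"{base}?{params}"
print(url)
```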

See my comments on Auth here: https://news.ycombinator.com/item?id=23320443

> We want to nail the auth and we're looking at leveraging Postgres' native Row Level Security. This is a tricky task to "generalize" for all businesses but we have some good ideas (we're discussing in our repo if you want to follow - https://github.com/supabase/supabase)
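For context, a Row Level Security policy attaches a predicate to a table that Postgres applies to every query. A rough sketch (the table name and JWT claim are hypothetical; the claim lookup follows PostgREST's `current_setting` convention), with the predicate mirrored in plain Python to show the effect:

```python
# SQL a setup like this might use: enable RLS and restrict rows to their owner.
POLICY_SQL = """
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;
CREATE POLICY todos_owner_only ON todos
    USING (owner = current_setting('request.jwt.claim.sub'));
"""

def visible_rows(rows, current_user):
    """Pure-Python model of the USING clause: only rows whose owner
    matches the authenticated user are visible."""
    return [r for r in rows if r["owner"] == current_user]

rows = [{"id": 1, "owner": "alice"}, {"id": 2, "owner": "bob"}]
print(visible_rows(rows, "alice"))  # [{'id': 1, 'owner': 'alice'}]
```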

> RESTful APIs I can of course get with PostgREST

That's what we use! See our supporting libraries:



https://github.com/supabase/postgrest-py (coming soon)

Also, I'm a long time user and a huge fan. My previous company is featured on their docs (blog post: https://paul.copplest.one/blog/nimbus-tech-2019-04.html#api-...)

You might want to look into how Hasura does auth. IMO it's really well thought through. However, it doesn't handle the actual authentication, just the access permissions afterwards. If you could layer firebase-style auth "strategies" (email/password, Facebook, Open ID, etc) on top of that kind of system, while keeping everything accessible directly in the main database, that'd be pretty awsome.

IOW, Hasura does authZ rather than authN, right?

For a little more context for those scrolling past, I'm assuming this is about authentication vs authorization.

Check the person is who they say they are (authenticate), and then check they're allowed to access the thing they want to view (authorize).

The first is quite easy to abstract, the latter is basically custom to most applications (for different definitions of "custom").
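A toy sketch of that split, with the token map and permission names invented for illustration:

```python
# authN: map a presented credential to an identity.
USERS = {"token-abc": "alice"}
# authZ: map an identity to what it may do.
PERMISSIONS = {"alice": {"read:report"}}

def authenticate(token):
    """Who are you? Raises if the credential is unknown."""
    if token not in USERS:
        raise PermissionError("unknown token")
    return USERS[token]

def authorize(user, action):
    """Are you allowed to do this? A pure lookup here; app-specific in practice."""
    return action in PERMISSIONS.get(user, set())

user = authenticate("token-abc")
print(authorize(user, "read:report"))    # True
print(authorize(user, "delete:report"))  # False
```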

Just FYI, making a good auth solution in Supabase will instantly make me a customer.

Just chiming in... Auth, free hosting and SSL is what drove me to Firebase in the first place. DB is the last thing.

Just thought you should know, when I read "Firebase alternative", I got excited about something else.

Good feedback - thanks. Stay excited! It's something we will build, and we will make the experience as simple as (simpler than?) Firebase.

Likewise. The Firestore client API is nice and convenient, but it is so limiting on queries. One has to know the limitations and plan ahead on how to model the data around them, or risk losing the ability to do even some fundamental queries [0]. [0] https://stackoverflow.com/questions/47251919/firestore-how-t...

Oh, that's super cool!

I'm even more excited for your project in that case. I hope I can make the time to build a prototype, right now I'm doing the same for retool, and I feel very spoiled for good choices all of a sudden when it comes to backend abstractions.

nice - if you build something and want it featured, let us know: alpha@supabase.io

Are you also using PostgREST in the backend to provide the APIs, or are your REST APIs and hence client libs PostgREST compatible?

Each database gets its own PostgREST instance, a realtime server, an API proxy, and a Postgres admin API (which we are building) to manage its schema programmatically.

We have client libraries for PostgREST, which then gets installed as a dependency in our main client library.

Any thoughts on Auth0?


I saw this pass by a lot of times, but either I misunderstand the pricing or it is indeed 'enterprise' pricing. Even small sites I launched easily have tens of thousands of users, and some have had millions, so I cannot see this pricing working at all on any of those without me closing the doors (having margins that are just too small, or just making losses) vs making a really good living.

I agree with this. I think Auth0 at one point was building tools for indie devs, but are now focused mostly on enterprises.

What have you used instead of Auth0?

We have a microservice for this which we use. It's internal and was extracted from systems/practices that grew over years. Basically it works like Auth0 with many of the bells and whistles, but it's just a service in our network. And, probably not a surprise, TCO (the cost of running/maintaining it) is a rounding error (a few $100/month) for millions of users.

I have found Cognito to be a similar enough, if not more quirky, offering, but at a fraction of the cost. Auth0 is just too expensive.

Love it, used it, mildly dislike the pricing.

But generally I'm of the opinion that auth tightly integrated into the platform has a lot of advantages. My dream is to avoid (almost) all glue between my components, I think Firebase became successful in big part because of that.

Recently released similar thing for gRPC + Go + PostgreSQL stack: https://github.com/sashabaranov/pike/

Features include:

• Generates CRUD operations for given entities

• Generates SQL migration files

• Gives the developer full control after the project is generated (nothing added on top of raw SQL/Go)

• Minimal dependencies (basically lib/pq and grpc)

• TLS out of the box

What the fuck. This is insanely cool!

Almost like Rails.

Guys, check out https://github.com/deepstreamIO - open source, MIT licensed - I've been a long-time user and it works great. It's a TypeScript code base, very readable and straightforward; it uses protocol buffers for messaging and uWebSockets.js for the websocket server (ultra fast). You get realtime records, pub/sub, events, RPC, fine-grained permissions, HTTP endpoints, and basically you can hook up to any backend (currently there are connectors for Postgres, RethinkDB, Elastic and others).

We need more people to use it, because the original developer is going into maintenance mode and we're trying to strengthen the community. It rocks!

I've been thinking about the need for easier-to-use databases for a long time. I previously started a company based on selling database software, so I've seen a lot of problems in the space. I honestly think most databases are too hard to use, and there have been no major improvements here in the last few decades.

Take Postgres. You write code in SQL, a programming language unlike any other mainstream programming language. Instead of writing code with for loops and if statements, you write it with joins and where clauses. On top of that, Postgres has a "magical query optimizer" that takes your SQL and figures out how to execute your query. Unless you have a good understanding of indexes and how they impact the query optimizer, you'll have a hard time getting Postgres to be fast. I still regularly say WTF when optimizing Postgres queries, even though I've been doing it for years. That's not to mention there's tons of database-specific terminology (tables, rows, schemas, etc.) that you have to learn before you can become an effective user of Postgres.
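To make that point concrete, here is the declarative style in question: one statement with a join, a group-by and a having clause doing what procedural code would do with loops and accumulators (SQLite is used so the sketch is self-contained; the schema is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'ada'), (2, 'lin');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 20.0), (3, 2, 5.0);
""")

# "Customers who spent more than 10, with their totals" - no loops, no ifs.
rows = conn.execute("""
    SELECT u.name, SUM(o.total)
    FROM users u JOIN orders o ON o.user_id = u.id
    GROUP BY u.name
    HAVING SUM(o.total) > 10
""").fetchall()
print(rows)  # [('ada', 29.5)]
```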

As much as HN likes to bash it, I think Mongo has done the best job of creating an easy to use database. With Mongo, you can store JSON and you can pull JSON out. Of course, I would never use Mongo personally.

I'm hoping that Supabase is able to bring about the next-phase of databases by not just making it possible to make a database fast, but by making it easy to do so.

This is a great comment. Especially this:

> I'm hoping that Supabase is able to bring about the next-phase of databases .. by making it easy to do so

We chatted to a lot of developers at the start of this year. Most of them thought that Postgres was amazing and wanted to use it, but they still chose other options (like Firebase) because they were easier. At that point we made "database UX" our main focus.

Was going to quip that I built a homegrown version of this in elixir which is the best stack to implement something like this if you want it to be scalable.

Then I saw that THIS was written in elixir.

That made me take this much more seriously!

Elixir really is the perfect tool for this job. Also there are a few features we are building into the realtime server that will make the system really shine (and extensible): https://github.com/supabase/realtime/issues/33

Basically we are refactoring it so you can pipe your database changes anywhere - webhooks, kafka, serverless, slack etc

Oh wow, Supabase is in Elixir! You guys hiring?

>You guys hiring?

Soon! We are joining the next YC cohort and we will hire after that

Only some parts are elixir. The full stack is here: https://supabase.io/humans.txt

Apologies for shameless plug and off-topic but I am very enthusiastic about companies using Elixir.

I am a senior dev (18.5y experience in total), currently focusing on Elixir and Rust (3.5y with Elixir so far, learning Rust fast and currently making an Elixir<=>Rust bridge for the sqlite database).

I'll be checking out your page every now and then. Do post in the "Who's hiring?" thread when you are ready to hire! I'll be looking for you there as well.

We'll definitely post in Who's Hiring around September. I'll keep a look-out for your handle here and around GitHub

My GitHub handle is `dimitarvp` (and almost everywhere else on the net really). Thanks for the chat! :)

I was looking into debezium to do this but an elixir solution would be fantastic. I'm trying to solve this in my own startup. Are you guys on the elixir slack? would love to sync up.

Our realtime server is built with Elixir. Feel free to email me directly if you want to chat (email is in my profile)

Congrats, the project looks solid. It's obviously targeted to JS devs who are familiar with firebase or want an easy/similar abstraction for realtime apps. It's great to have several great frameworks tackling the same problem with different flavours (Hasura, Phoenix, Supabase, etc), all with postgres as first class citizen.

I feel like the key is in the choice for the realtime backend (Phoenix/Elixir). I've built real-time firebase apps (JS) in the past and today my choice would probably be Phoenix. It also saves many LoC (80-90%?), and it's a joy to work with. Reduces the JS fatigue and opens up a new paradigm that makes programming fun again.

Realtime is definitely our strongest feature right now. We started Supabase to solve a problem at our previous startup, where we built a chat application using Firebase. Within a few months we discovered our customers were receiving their chats out-of-order, and it took a few more days to figure out it was due to some Firebase quirks.

We migrated to Postgres and started with Triggers/NOTIFY. But that also has some limitations (an 8000-byte payload limit), so we implemented the Elixir server: https://github.com/supabase/realtime
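For reference, the usual workaround for that payload cap (a generic sketch, not Supabase's actual implementation) is to inline the row when it fits and otherwise notify with only the primary key, letting the listener re-fetch the row. The decision logic looks something like:

```python
import json

PG_NOTIFY_LIMIT = 8000  # Postgres's default NOTIFY payload cap, in bytes

def make_payload(row: dict, pk_field: str = "id") -> dict:
    """Build a NOTIFY-safe payload: inline the row if it fits under the cap,
    otherwise send only the primary key so the listener re-fetches the row."""
    full = {"kind": "row", "row": row}
    if len(json.dumps(full).encode("utf-8")) < PG_NOTIFY_LIMIT:
        return full
    return {"kind": "key", pk_field: row[pk_field]}

print(make_payload({"id": 1, "body": "hi"})["kind"])           # small row is inlined
print(make_payload({"id": 2, "body": "x" * 20_000})["kind"])   # big row sends only the key
```

The trade-off is an extra round trip for large rows, in exchange for never dropping a notification.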

To be a Firebase alternative, you would expect a full-stack solution; DB, Auth, Cloud Functions, Analytics...etc. That's the main selling point of Firebase.

This just seems to be an Open-Source implementation of Firebase's "Realtime Database", which is kind of superseded by the new "Firestore", which is much more popular.

However it's a smart business move, as many products are locked into the "Realtime Database", so this could provide an easy exit door to Open-Source software.

That's true. See my comments here: https://news.ycombinator.com/item?id=23320443

tldr: > We've been building furiously since January but (not surprisingly) we haven't yet reached feature parity. But we will :)

I'm a consultant and I love the idea of having Postgres instead of NoSQL on the backend, coming from Amplify.

For me the MVP, before I could use it for my commercial projects, would be: DB+auth. At that point, I could switch - and probably would.

Also, kudos to you for being the diametric opposite of a 'useless' startup. Not only would I use this once I could, I'd talk it up to everyone in my space.

Thanks for your feedback. We hope to have a proper Auth system in place before our official launch in September.

We like building things that people will use. The number of features that Firebase offers is vast, so it might take us some time, but we will get there and we’ll make sure we do it in a way that benefits the open source community

Feature parity with Google's Firebase?

Let me assess this, on the back of my napkin:

1. linkedin 'firebase' + amazon (current employer) -> 695 people.

2. 10% of above = 70 FTE

3. They started (googled): They founded Firebase as a separate company in September 2011 and it launched to the public in April 2012.

1+2+3 = It's impossible you will have feature parity in the next 10 years.

BUT it's possible you can achieve Pareto parity, leaving off the table the most complicated 20%. Good luck :)

A bit of a plug, but if you wish for something that has Postgres reliability, the taste of NoSQL but with strong typing, and JSON output à la GraphQL, give EdgeDB a try:


The team behind it is awesome (one is a Python core dev), and it's FOSS; they have been using it internally for some time.

I feel like this is close to what I'm looking for but not quite there in terms of minimal boilerplate backend. Basically what I want to do is write a GraphQL schema and have that be the ultimate source of truth. The schema then should generate the full DB (honestly I don't even care if it's SQL, NoSQL, etc. the idea is that part is abstracted away). Then the schema can generate fully typesafe queries for the front end and serverless functions, similar to what graphql codegen does. I'm almost there with Hasura, but Hasura still requires definition on the DB side and generates the GraphQL rather than the other way around.

I'm wondering how FaunaDB's GraphQL would suit you? It's basically: "drop in schema, get API". You can't do everything you'll ever need yet, but you can go quite far, and the implementation of each query is efficient: every GraphQL query becomes one FQL query and thus one transaction, backed by automatically generated indexes. FaunaDB is also well suited to GraphQL-like queries, because those are a bit like tree-walking queries, for which the 'index-free adjacency' concept from graph databases works well, and FaunaDB's References are quite similar. Although we're not a graph database, we don't have problems with huge graph-like joins because of that. Besides that, the expressiveness of our native FQL made it easy to generate an FQL query from each GraphQL query.

If there is something missing in our GraphQL feature set, feedback is very welcome :).

This looks really interesting thanks for sharing.

That's pretty much what people created ORMs for, though.

E.g., the Django ORM will let you define the schema and be the single source of truth for your DB and your REST API.

Right but the idea is I'm trying to approach it from the front end and define the schema using the GraphQL queries which the front end will use.

Checkout: https://prisma.io

My app is currently built on top of Firebase but I’m keenly aware of the cost of lock-in as traffic escalates.

Is this a replacement for the whole suite of Firebase offerings (hosting, auth and data store) or just a subset?

What’s the migration pathway for someone currently on Firebase?

We've been building furiously since January but (not surprisingly) we haven't yet reached feature parity. But we will :)

> data store

Yes - our data store is just Postgres so you can migrate in/out - no lock-in

> hosting, auth

Not yet. We want to nail the auth and we're looking at leveraging Postgres' native Row Level Security. This is a tricky task to "generalize" for all businesses but we have some good ideas (we're discussing in our repo if you want to follow - https://github.com/supabase/supabase)

> What’s the migration pathway for someone currently on Firebase?

This is also something we're building - a migration tool. Mapping NoSQL to RDBMS is complex, but something I tackled in my previous company. We'll build it so that Firebase and Supabase run "in parallel", which we can do since both have realtime functionality. And then when you're happy, you will be able to switch off Firebase

That's pretty much what I've been wanting for years: open source software that makes row-level security super convenient via Postgres and OAuth, and that can handle subscriptions.


Interesting, thanks. I've bookmarked your platform, I'll circle back if/when migration away from Firebase pops back on the agenda.

It looks like a way to turn Postgres into a Firebase real-time database alternative, so not Crashlytics, Auth or file storage AFAIK (and I'd suggest making it more obvious on the website if I'm wrong!)

Disclaimer: I work on Firebase but I'm always speaking for myself on Hacker News.

This looks really cool! Honestly I think the Firebase comparison may be throwing some people off here because this is a SQL-based system, which means there's a huge base of existing tools/techniques/knowledge to build from.

I like any tool which makes it easier to build an app. It's 2020 and we still start every app like this:

  * Pick a database
  * Spin up a server
  * Connect the DB to the server
  * Create a REST API for the server so the client can talk to the DB
  * Somehow make that secure enough
  * Write a bunch of CRUD code on the client
What a waste of time! Glad to see people like Supabase taking on this problem as well.

Maybe I am an old fart, but what’s wrong with the steps you mention? What would your ideal sequence of steps be?

If the answer is "just call an API to handle your data", sure, that works for POCs/small apps. But I'm a bit hesitant to put all my business's data inside a proprietary data store that I don't control and which isn't co-located with my app (i.e., every data-store request makes a round trip to the Google server hosting the Firebase app... maybe not a huge deal if the app itself is in Google Cloud?)

Disclaimer: I’ve only used firebase tangentially, haven’t built an app from scratch with it.

It's not bad doing it once. Furthermore I'd suggest you do it truly from scratch once just for the learning experience (no framework [Go makes this easy] or from TCP socket [synchronous with Python is easy]). However it's ridiculous going through the same sequence for every single app you build over a career. Nothing changes, it's all just boilerplate.

I don't want to ever write this sort of boilerplate code again, I want it generated from a DB schema or something similar [0]. There are better things to spend time on.

[0] https://news.ycombinator.com/item?id=23322300
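As a tiny illustration of that idea, here's a sketch that derives an INSERT statement from a live SQLite schema, the same way a fuller tool could generate whole CRUD layers from it (the function and scope are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT, done INTEGER)")

def generate_insert(table: str) -> str:
    """Generate an INSERT statement by introspecting the table's schema."""
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")
            if not row[5]]  # skip the auto-assigned primary key
    placeholders = ", ".join("?" for _ in cols)
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders})"

sql = generate_insert("todos")
print(sql)  # INSERT INTO todos (title, done) VALUES (?, ?)
conn.execute(sql, ("write less boilerplate", 0))
```

The schema stays the single source of truth; the boilerplate is regenerated from it instead of hand-maintained.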

To me there's a sliding scale between productivity, where you use a heavy framework like Django and Rails to do everything for you, and control, where you write boilerplate to stitch all your favorite single-purpose libraries together using your preferred patterns.

They each have their purposes. Django will get you to market fast with all the features you need, and keep you there for a long time. But it forces (through its library structure) and encourages (through its common patterns) ridiculously tight coupling.

I work on a Django monolith now that runs an org needing to grow beyond it. When we need something not quite offered by a Django library, or we need to move something with different scaling needs out to another service, it's all miserably difficult, because they followed all the Django-recommended best practices. The framework controls you; you don't control it.

Now we're back to writing boilerplate to enforce a semblance of clean architecture onto it. It's kind of boring sometimes, but once a domain gets refactored out of the Django way, our ability to deliver features quickly and safely in that domain goes up 10x.

The "Fat Models" recommendation is one of the most destructive in my opinion: https://django-best-practices.readthedocs.io/en/latest/appli..., along with Django Rest Framework "Model Serializers". A JSON serializer that talks directly to the database is just madness.

So just don't use fat models. The only sensible way to use Django is to put all the business logic in service methods, not in models/managers or serializers/forms.

If all your business logic is in models then of course your app is going to be completely unmaintainable and it's going to take developers weeks to do things that should normally take a couple hours.

There is definitely a real problem in the Django community where lots of people have recommended architecting apps in bad ways, so then you get developers who want to implement the app the "standard" way that Two Scoops or whatever recommends. But Django itself is still a great tool, you just need to be willing to call out your teammates if they're unable to think for themselves.

I was just acquired into a team that enthusiastically recommended that book. Are there any other references I could look at or point to as alternatives? I've used a good bit of Flask but don't have much experience with Django.

The book is actually worth reading, there are just some things that I strongly disagree with. The reason I'm writing my own guide is because there isn't anything else there that I like.

Completely agree!

I’m writing a Django style guide since all the existing ones are bad. If you send me an email then I’ll send you a draft, so that you have something to show your coworkers.

Absolutely. There's a big curve to understanding Django, Rails, .NET well enough to be able to prototype a real application. There's an even bigger curve to doing that in a maintainable way.

I think it's good to get familiar with a variety of ways of building applications over a career so you can pull from the best of them to, again, be able to focus on _business problems_ you have and can solve. To me that includes a sustainable development model and system architecture.

Can you provide a more detailed critique of Django’s “Fat Models” recommendation? How would you prefer to manage this logic?

TL;DR Django models are the database, which makes them the wrong choice for presenting a service-layer interface to the persistence. They are inherently unable to hide, encapsulate, and protect implementation details from consumers that don't care or shouldn't be allowed access.

The Django model is a representation of the database state. It's an infrastructure-layer object. It is _very_ tightly coupled to the database.

Your business needs should not be so coupled to the database! While it is very helpful for an RDB to accurately model your data, a database is not an application. They have different jobs.

(The TL;DR of the following paragraph is "encapsulation and interfaces") Your business logic belongs in the "service layer" or "use case layer". The service layer presents a consistent interface to the rest of the application - whether that is a Kafka producer, the HTTP API views, another service, whatever. Your service layer has sensible, human-understandable methods like "register user", "associate device with user", whatever. These methods are going to contain business logic that often needs to be applied _before_ a database model ever exists, or apply a bunch of business logic after existing models are retrieved in order to present a nice, usable, uncluttered return value. Your service layer hides ugly or unnecessary details of the database state from the rest of the application. Consumers shouldn't care about these details, they shouldn't rely on them (so you can fix or change things without breaking the interface), and they very probably should not be given direct access to edit whatever they want.

If you do not do this and instead choose the fat models method all of the following will happen:

1. You will repeatedly write that business logic everywhere where you use the models. You'll write it in your serializers, your API views, your queue consumers/producers, etc. You'll never write it the same way twice and you damn sure won't test it everywhere.

2. You'll get tired of writing the same thing and you will add properties or methods on the model. This is the Fat Model! This might be appropriate for a convenience property or two that calculates something or decides a flag from the state of the model, but that's it. As soon as you start reaching across domains and implementing something like "register device for user" on the user model, or the device model, you are just reinventing a service layer in a crappy way that will eventually make your model definition 4000 lines long (not even remotely an exaggeration).

3. Every corner of your application will be updating the database - via the model - however it wants. They will rely on it! Whole features will be built on it! Now when it's time to deprecate that database field or implement a new approach, too bad. 20 different parts of your app are built on the assumption that any arbitrary database update allowed by the model is valid and a-ok.

Preferred approach:

1. Each domain gets a service layer, which contains business logic but also presents a nice, reliable interface to anything else that might consume that domain. This interface includes raising business-logic errors that mean something related to our business logic. It does not expose "Django.models.DoesNotExist" or "MultipleObjectsReturned". It returns an error that tells the service consumer what went wrong or what they did wrong.

2. The service layer is the only thing that accesses or sees the Django models aka the database state. It completely hides the Django models for its domain from the rest of the application. It returns dataclasses or attrs, or whatever you want to use. The models are no longer running rampant all over the application getting updated and saved willy nilly. The service layer controls what the consumers in the rest of the application can know and do.

You will write more boilerplate. It will be boring. You will write more tests. It will be boring. But it will be reliable and modular and easier to reason about, and you can deliver features and changes faster and with much less fear of breakage.

Your business logic will live one place, completely decoupled, and it can be tested alone with everything else mocked.

How your consumers (like API views) turn service responses and errors into external (like HTTP) responses and errors lives in one place, completely decoupled, and can be tested alone with everything else mocked.

Your models will not need to be tested because they are just a Django model. They don't do anything that's not already 100% tested and promised by the Django framework.
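A minimal, framework-free sketch of the pattern described above (all names are hypothetical, and a dict stands in for the ORM):

```python
from dataclasses import dataclass

# Pretend persistence layer -- stands in for Django models, hidden behind the service.
_LOCATIONS = {1: "Downtown"}
_reservations = []

class ReservationLocationNotFound(Exception):
    """Domain error raised instead of leaking an ORM exception like DoesNotExist."""

@dataclass(frozen=True)
class Reservation:
    """What the service returns to callers -- a plain dataclass, not a database model."""
    location_name: str
    start: str
    end: str

def create_reservation(location_id: int, start: str, end: str) -> Reservation:
    """Service-layer entry point: validates input, touches the 'models',
    and returns an uncluttered result the caller can rely on."""
    if location_id not in _LOCATIONS:
        raise ReservationLocationNotFound(location_id)
    _reservations.append((location_id, start, end))  # the only place writes happen
    return Reservation(_LOCATIONS[location_id], start, end)

print(create_reservation(1, "09:00", "10:00").location_name)  # Downtown
```

Callers never see the underlying storage, so the table layout can change without touching anything outside the service.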

We started moving off "fat models" at my job and onto DDD (service methods, entities, etc.), and I have to say after a year I'm not a fan. Here are my beefs:

1. If you're not using models, it's a lot of work to stay fast.

If you've got a Customer instance, and you want to get customer.orders, you've got a problem if it's not lazy. If it's a queryset, you get laziness for free, if it isn't you have to build it yourself. God help you if you have anything even remotely complicated. You also need trapdoors everywhere if you want to use any Django feature like auth, or Django libraries.

2. You have to build auth/auth yourself

Django provides really nice auth middleware and methods (user_passes_test).

3. Service methods end up doing things something else should be doing.

You might be doing deserialization, auth/auth checks, database interactions, etc. All of that stuff belongs at a different layer (preferably abstracted away like @user_passes_test or serializers).

4. The model exposed by Django and DRF is actually pretty good, and you'll probably reimplement it (not as well)

The core request lifecycle is:

request -> auth -> deserialize -> auth -> db (or other persistence stuff) -> business stuff -> db (or other persistence stuff) -> serialize -> response

We've reimplemented all of those layers, and since we built multiple domains we reimplemented some of them multiple times. It probably would've been better to just admit "get_queryset" and the like are good ideas.

5. Entities are a poor substitute for regular objects and interfaces.

We've mostly ended up wrapping our existing models in entities, but just not implementing most of the properties/fields/attributes/methods. But again, we have to trapdoor a lot, we have trouble with laziness and relationships in general, and we have a lot of duplicate code in our different domains.

6. We have way too many unit tests.

Changing very small things requires changing between 5-10 tests, each of which use mocks and are around a dozen lines at least. Coupled with the level of duplication, this has really slowed us down. They also take _forever_ to run.

FWIW I think you're right about jamming too much into models; I think that works at a small scale but really breaks down quickly. I think at this point, my preferences are:

1. Ideally, your business logic should be an entirely separate package. It shouldn't know about HTML, JSON, SQL, transactions, etc. This means all that stuff (serialization, persistence) is handled in a different layer. Interfaces are your friend here, i.e. you may be passing around something backed by models, but it implements an interface your business logic package defines.

2. The API of your business logic package are the interfaces you expose and document. The API of your application is your REST/GraphQL/whatever API--that you also document.

3. Models should be solely database-specific. If you're not dealing with the database and joins and whatever, it doesn't go in models and it doesn't go in managers.

4. Don't make a custom user model [1].

5. Serialization, auth, and persistence should be as declarative and DRY as possible. That means class-level configuration and decorators.

6. Bias strongly against unit tests, and rely more strongly on integration tests. Also consider using them during development/debugging, and removing them when you're done.

Does that seem reasonable to you? I spend a lot of time thinking about this stuff, and I would like my life to be less about it (haha) so, any insight you can give would be super appreciated.

[1]: https://docs.djangoproject.com/en/3.0/topics/auth/customizin...

I think we're agreeing on the majority of this. We have not chucked DRF or Django auth or anything. We've just created service layers to take the business logic out of the API views, API serializers, and DB models.

Each action looks like

1. Request arrives into the app, auth happens using DRF on the API view. This is all using Django & DRF built-ins.

2. In the API view: request data gets serialized using DRF serializers, but no calculated fields or model serializers or other BS. JSON -> dict only. The dict does not have models in it, only IDs: profile_id, reservation_id, whatever. Letting the "model Serializers" turn a JSON location ID into a Location model is how you get 10 database queries before you've done _anything_. At this point we don't care if the location_id is valid. We are just serializing.

3. Still in the API view: the dict dump from the serializer gets shoved into whatever format you're going to send to the service layer. For us this is often an attrs/dataclass. If we're calling the "Reservations Service" method "create reservation", we pass in location_id, start time, end time, and the User model. The User model in this case is breaking our policy of not passing models through the service boundary, but it's the one exception for the entire code base, because getting it for free from DRF's user auth is too useful to pass up. We would basically be throwing it away and then re-fetching it in the service layer, which is dumb.

4. Call the Reservations Service layer. The service layer is going to do n things to try to create the reservation. If it needs to insert related records, like in a transaction, cool. Its job is to provide a sane interface for creating a Reservation, and whatever related side effects, not to only ever touch the Reservation model/table and nothing else. The base of our Domain is Reservation, creating a ReservationReceipt and a ReservationPayment are entirely within scope. Use the Payment model directly to do this if there's zero extra logic to encapsulate, or create a Payment service if you have a ton of Payment-creation logic you need to extract/hide from the Reservation service. You can still manage it all in a transaction if you want. The point is that the caller (the API layer) doesn't see this. It only sees that it's calling the Reservation Service.

5. The Reservation service will either return a dataclass/attrs objects representing a successful Reservation created, or raise a nice business error like ReservationLocationNotFound (remember when you passed in a bad location id to the API, but we didn't want to check it in the API layer?)

6. API View takes the service response & serializes it back, or takes the business error and decides which HTTP error it should be.
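Steps 5 and 6 - mapping domain errors to HTTP responses at the view boundary - can be sketched like this (framework-free; the names are invented for illustration):

```python
class ReservationLocationNotFound(Exception):
    """Domain error raised by the service layer for an unknown location."""

def reservation_service(location_id: int) -> dict:
    # Stand-in for the real service layer: succeed only for a known location.
    if location_id != 1:
        raise ReservationLocationNotFound(location_id)
    return {"location_id": location_id, "status": "confirmed"}

def reservation_view(payload: dict):
    """API-view glue: call the service, translate domain errors to HTTP codes.
    The view never sees ORM exceptions, only domain errors."""
    try:
        result = reservation_service(payload["location_id"])
    except ReservationLocationNotFound:
        return 404, {"error": "unknown location"}
    return 201, result

print(reservation_view({"location_id": 99})[0])  # 404
print(reservation_view({"location_id": 1})[0])   # 201
```

The error-to-status mapping lives in exactly one place, so adding a new domain error means one new `except` clause, not a hunt through the codebase.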

Got it, yeah that makes sense. At a previous job, we invested pretty heavily in model serializers, but yeah they’re bonkers slow. Thanks for weighing in, really nice to talk about this stuff with someone with a lot of similar experience.

Who builds the same app over and over? And if you do, why not just copy it? With new tech there's a lot of churn: who knows, Google might announce the shutdown of the DB service tomorrow, and next week you'd need to rewrite the app in the latest version of the framework. Meanwhile, old boring tech will still work just fine 20 years from now.

I can't copy code I wrote at the previous employer. It's best to learn and stick to a framework, or write something yourself once you can steal from. I'm doing the latter with code generation.

Modern programmers don't get paid just to write down code per se; you do the engineering as well as the writing. Or maybe if you are using a framework you are just typing code? I dunno. But if you have already solved a problem, it will take significantly less time to rewrite it, and you could probably make it better too, since you know the weaknesses of the old implementation.

Until you forget it again. I'd like to never worry about the boilerplate class of problems again (CRUD, auth, auditing, filtering, pagination, etc.). That's what I'm working toward.

There should exist plenty of libraries, examples and documentation for those in any language. You shouldn't have to use third party services.

Ruby on Rails has this as a feature called "scaffold": https://www.rubyguides.com/2020/03/rails-scaffolding/

Rails is probably one of the best ways to not waste time. I knew they had some of this capability but I wasn't sure if they generated the whole controller, fully hooked up to the models, too.

The only thing that turns me off about Rails and .NET is that it's a big learning curve _because_ the projects are so feature complete. Since I haven't worked professionally in either of them, I haven't been able to get myself to learn either well enough to do rapid prototyping.

Agreed, this is a real problem. I recently did a test. I have about a year of rails experience but it was so long ago that I forgot most things and had to look them up. I also wrote quite a bit of node/express but it was in the node 0.10 days so I had a lot of catch-up to do.

I started a new app and I got a basic Users and Sessions model with endpoints running (I like this test because dealing with passwords and API tokens requires writing some code instead of generating it all, but also leverages libraries).

These numbers aren't useful by themselves since you don't know the details of the work I did, but they may be useful side by side:

Express/TypeScript: Took me about 24 to 30 hours to get it all implemented. I started with no ORM (node-postgres) then tried out Knex, then settled on Slonik.js. Because of that I had to re-write some stuff a few times. However, in a bare-bones "framework" this is part of the penalty.

Rails: Took about 10 hours (mostly reading documentation and Michael Hartl's book).

I think I prefer the Node approach simply because I know exactly what is going on under that hood. That said however, if I were a second developer coming into the code base I'd prefer the Rails approach because I'd have to learn "the framework" anyway and a widespread standardized one like Rails would be preferable to learn IMHO.

In conclusion: My numbers are highly individualized and don't tell the whole story of course. In related news, I actually threw both of them away and went back to Elixir/Phoenix, my third love. I'm quite happy there at the moment, and I don't anticipate moving again.

I think your comment about Rails being more accessible since other programmers would probably know it seems to be true of any framework, but (anecdotally) I question whether the hypothesis works in real life. Any fairly complex application built using frameworks seems similar at first, but then there's all kinds of custom hacks and non-idiomatic code that needs to be explained anyway. I guess at least most programmers will be familiar with the shape of the code, but how important is that?

I suspect that frameworks are ideal for coding boot camps, where one doesn’t need to understand the details of what frameworks do as much to be productive. And for boot camp grads perhaps a familiar framework makes the code more accessible? IDK, I’m speculating.

You make some excellent points. With small apps I find I can drop into a rails codebase and be productive in minutes, but with any complex application there's enough hacks that it's almost never true. I think the framework/rails approach doesn't really do much for complex apps. Just my anecdata as well.

If you would like to explore more TypeScript ORM libraries, TypeORM comes to mind, and recently Zapatos made it onto the front page. Although it's new, it looks like a nice, simple but powerful alternative, much closer to raw SQL than TypeORM's abstraction layer/magic sauce.

just my 2 cents

As for more general web frameworks like Rails (not ORMs): I personally use Next.js with TypeScript, like Ant Design for its React component library, and use Jest for testing + ESLint + Prettier.

That makes the most of my IDE; it's like having a second pair of eyes behind you in real time catching every typo, between types, linting, and other VS Code extension goodies...

Thank you. I have been eyeing Next.js quite seriously. I'll be needing to build a non-trivial frontend application pretty soon and I've been really wondering if I should go the Next.js route.

Have you used it with a big app?

No, sadly I have not, so I can't vouch first-hand for its production readiness in large-scale projects, but I think you should check out their latest blog posts/releases (Instant Reload, Dynamic Routing, etc.)

I would just start with nice defaults like TypeScript, etc, which might help maintain a sane codebase in a large project.

But Next.js is really flexible; I find myself using it as my frontend hammer. Maybe you should take my opinion with a grain of salt, but I've felt more productive coding with it than with others!

I've used Firebase for a couple MVPs. It is very fast to develop on. It makes it easy to get a web or mobile app up in a few hours once you know what you're doing. It is definitely possible to be that fast using other technologies, but the learning curve is higher.

When you're just experimenting, trying to find market fit and get something to stick, Firebase is a decent solution. I haven't tried anything sizeable on it though. Eventually, I'm sure we would've migrated if we were successful enough.

I've seen this sentiment echoed a few times.

What are the advantages, or class of advantages, you get by moving on from Firebase to something stronger/harder?

I’m very junior, so take what I say with a grain of salt. But when I was deciding whether to use Firebase, my biggest hesitation was simply that if Firebase ended today, I’d have to completely rework my application’s architecture.

Though that is unrealistic, even thinking about it was enough to make me want to be in full control of my application’s various layers.

You are bringing a senior consideration to the table; they can and do change their pricing at any time for any reason. Also, every single day you rely on them is a day that your architecture grows on that volatile cornerstone. Would look at https://parseplatform.org/ or https://postgrest.org/ as alternatives; turnkey, hosted solutions are available for each.

This is not a junior consideration and it's not unrealistic that firebase ends in say, the next 60 days which in terms of porting your entire application is essentially today. You're right to weigh this as a concern in evaluating approaches and everyone should, regardless of experience.

All PaaS providers work hard to couple you to their platform not (necessarily) for nefarious reasons, but because that's how abstractions work. You need to be continuously vigilant and aware of when and how you are tied, do a cost/benefit analysis and have risk mitigations for things like your original thought, so good for you.

I expect that our business logic would eventually have outgrown rules and cloud functions.

We used algolia to do full text and geo search. It worked, but I expected eventually it'd be really painful to reindex.

Same sort of thing with analytics. Firebase analytics is pretty powerful, but eventually we would want all our data mirrored somewhere we could use regular BI tools.

Then there is cost. We would have to weigh the cost of rewriting against the GCP bill, but at some point I expect it'd make sense.

Take this all with a grain of salt. We didn't get far enough to test any of these assumptions.

You can leverage Postgres full-text search, and GeoJSON/PostGIS support. Also for replacing algolia ElasticSearch comes to mind, that should work for most of those use cases.
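For example, a basic Algolia-style search can often be covered by Postgres alone. A rough sketch against a hypothetical `articles(id, title, body)` table (`websearch_to_tsquery` is available in Postgres 11+):

```sql
-- Sketch only: built-in Postgres full-text search over a
-- hypothetical articles table.
select id, title
from articles
where to_tsvector('english', title || ' ' || body)
      @@ websearch_to_tsquery('english', 'open source firebase');
```

In practice you'd store a generated tsvector column with a GIN index rather than computing it per query.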

For analytics, on the SaaS side you have both Mixpanel and Segment, and also some nice privacy-aware alternatives to Google Analytics like Fathom and a few other indie startups.

Recently an open source Segment alternative came along on HN...

MS PowerBI connects with lots of stuff, although it's paid. ELK is open source. Also, Metabase is a nice open source tool for BI.

Lots of tools!

What I was trying to say is that I was anticipating having to write and maintain a bunch of glue between firebase and these tools and maybe eventually outgrowing this glue.

A benefit of using a traditional SQL database is much of this stuff exists already off the shelf.

Maybe Firebase is there or is getting there. It's almost two years since I built something on it.

What's more, in the days of IaaS you can just as easily spin up a new server in some web interface, copy-paste a setup script you wrote once and after getting a coffee you come back to a working database server that does 90% of what you need. How much automation we use is independent of how much of this automation we decide to outsource.

EDIT: But hey, maybe that's just me not understanding the 2020 mentality of "Take my data and make it work" well enough

My ideal would be sqlite in the back, sqlite in the browser, with a sync table that uses database triggers for managing a log of the normal tables.

I've written a few more thoughts here: https://github.com/ericbets/erics-designs/blob/master/funcdb...

That's pretty close to the pouchdb + (couchdb|cloudant) combo...

If PouchDB were better maintained, I feel like it would be much more popular. IBM should really sponsor it, considering they even list it as a Cloudant client.

Excellent for deploying small auxiliary apps that are not necessarily your company’s core competency, or a side project.

> Honestly I think the Firebase comparison may be throwing some people off here because this is a SQL-based system

You're correct - we're building on top of Postgres so it's not a perfect comparison. And we're a bit early to even make it a fair claim.

I'm glad the crowd here has been so receptive. We were planning on launching in September, but it must have been picked up somewhere on the internet (thanks @habosa)

Just a counter datapoint, the Firebase comparison is what got me to click on the link and check it out. I'm a huge Firestore fan... here's what would get me to try something new:

- Better user permissions, ability to actually see my users and user permissions in a UI

- Better ability to choose conflict resolution logic

- Tied to that, ability to get "patches" from the server... right now if a change happens in my Firestore document, the server sends the entire document to the client. I'd love to be able to get a patch with just the changed data (and of course send patches to the server too, which it looks like you would support).

- Join queries to save me from needing to do multiple client -> server trips. If it's SQL I'm guessing you would support this.
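The "patches" idea can be sketched client-side. This hypothetical helper (not a Firestore or Supabase API) computes just the changed fields between two document snapshots, so only a small diff would need to cross the wire:

```javascript
// Sketch: compute a shallow patch between two document snapshots,
// so only changed fields are sent over the wire.
// (Hypothetical helper -- not part of Firestore or Supabase.)
function diffDocument(before, after) {
  const patch = {};
  for (const key of Object.keys(after)) {
    if (JSON.stringify(before[key]) !== JSON.stringify(after[key])) {
      patch[key] = after[key]; // field added or changed
    }
  }
  for (const key of Object.keys(before)) {
    if (!(key in after)) patch[key] = null; // field removed
  }
  return patch;
}

const prev = { title: "Draft", tags: ["a"], views: 10 };
const next = { title: "Draft", tags: ["a", "b"], views: 11 };
console.log(diffDocument(prev, next)); // → { tags: [ 'a', 'b' ], views: 11 }
```

A real sync layer would use deep diffs and a tombstone convention rather than `null`, but the shape of the idea is the same.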

Very interesting project! I'll be following your progress.

Could you expand on what you mean by "conflict resolution"?

Just that if two people make changes at the same time on different clients, I don't have much control over merging those changes together with Firebase (that I know of). I think it's just a "last change wins" type system.
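For intuition, a field-level "last change wins" merge can be sketched like this (illustrative only; not how Firestore implements it internally):

```javascript
// Sketch of field-level "last write wins": each client stamps its
// writes with a timestamp; on conflict, the later stamp is kept.
// (Illustrative only -- not Firestore's actual mechanism.)
function mergeLww(a, b) {
  const merged = {};
  for (const key of new Set([...Object.keys(a), ...Object.keys(b)])) {
    const va = a[key], vb = b[key];
    if (va === undefined) merged[key] = vb;
    else if (vb === undefined) merged[key] = va;
    else merged[key] = va.ts >= vb.ts ? va : vb; // later write wins
  }
  return merged;
}

const clientA = { title: { value: "Hello", ts: 100 } };
const clientB = { title: { value: "Hi", ts: 105 }, body: { value: "x", ts: 90 } };
console.log(mergeLww(clientA, clientB));
// → { title: { value: 'Hi', ts: 105 }, body: { value: 'x', ts: 90 } }
```

Anything smarter (three-way merges, CRDTs) is exactly the kind of control the parent is asking for.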

This looks awesome. Realtime and easy access to Postgres. However, I'm not sure where the Firebase comparison comes in. You can't really even say it is a Realtime DB or Firestore comp since those are no-sql.

You might want to take off the Firebase comp until ready otherwise it distracts from the cool offering.

If you are adding Auth the Firebase way, remember to add a UI for JWT claims. Right now Firebase does not expose the list of claims through the dashboard, so every time you change permissions you have to programmatically iterate through everything if you want a holistic overview.

That's something I didn't know about. I'll save this comment for later and will have a think how we can give a better experience. Thanks for the tip!

Hi, do you plan to be a full BaaS? With Authentication, email confirmations, push notification etc?

Yes! We'll let the customers drive the priorities. Right now Authentication is our #1 focus

Will you provide, or are you looking into, passwordless logins? I currently deal with a lot of support tickets, most of them for resetting passwords.

We are still planning our auth system, but passwordless login is technically simpler than third-party logins, so I imagine we will. Feel free to create a GitHub issue if you want to track our progress on this one.

Will you provide a Postgres instance too?

If not what provider would you recommend?

Yes, we already provide a Postgres instance (https://app.supabase.io)

There are solutions like Hasura or Fauna which give you a GraphQL server automatically.

How come Firebase still doesn't have some sort of agnostic querying layer?

I can't help but mention that I rather like the alternative approach of keeping steps 1-3, and using Phoenix LiveView (or C#'s Blazor?) for the rest. Not saying it's better or worse, but I quite like it.

If you don't pick a db, where do you store data ?

That would be step one out of the first three that I bother with :).

I read "skipping steps 1-3" instead of "keeping steps 1-3", my bad.

Do you have good resources for learning Elixir with OTP and Phoenix? I also believe LiveView can deliver 80% of the functionality for 20% of the price, so I'm very interested in learning it.

I can also recommend Programming Phoenix. I've also heard good things about the LiveView courses on the Pragmatic Studio.

"Programming Elixir" and "Designing Elixir Systems with OTP" are also good, the former especially when you're getting started. From what I remember, "Programming Phoenix" was enough to get going, though; these two books are better for deeper knowledge of Elixir/OTP.

Programming Phoenix 1.4 is probably the best book. Very friendly and easy to follow (and written by the Phoenix and Elixir creators)


1. I liked that the nodejs example shown here uses async await syntax. I don’t get why people still put callback hell examples in 2020.

2. What I really like about firebase is firestore is a scale to zero database. It doesn’t cost me anything if I have tiny amount of data and low frequency of users. This lets me spin up lots of small sites at no cost. With cloud run I have a real api server that also scales to zero. My last GCP bill was 3 cents. And yes, GCP was dumb enough to actually charge my credit card for that. Not sure why they don’t have a $1 minimum.

But I like what I’m seeing as supabase’s pitch.

I too have a project on firebase for a small business which costs me 0 but is vital for this business. I use firestore, cloud functions, auth and cloud storage for this project. Very nice for small apps.

Good luck! It’s great to see another BaaS alternative on the market, especially since the recent trend seems to be a lot of Jamstack (backend-less?)

Hopefully open source means better documentation too. The only places I know of for Firebase are their docs and their own YouTube channel, which can feel limited once you pass the “get-started” depth

co-founder here, happy to answer any questions. We're currently in alpha - app.supabase.io

We also have a lot more to build, so to reward you for your patience we are completely free right now

How does this compare to Postgraphile or Hasura? Or rather, how is it going to compare once you've had some time to get it out of alpha.

Good question - one we get asked often. We don't use GraphQL (but you can do deep-queries: https://supabase.io/docs/library/get#query-foreign-tables)

Under the hood we use PostgREST. At the same time, we may (also) offer GraphQL using Postgraphile, so people can bring their own client-library.

Differences from Hasura:

- Auth: we will use Postgres RLS

- Realtime: we don't use triggers, we use WAL (much more scalable)

- Only Postgres: Hasura mentioned they will build for other RDBMSs. We're all-in on PG and some of its more advanced features (replication, high availability)

- UI/UX. We will build an interface like Airtable, so that even your non-techie friends can use it

Are you serious?

Two of your points are future feature promises. One of them is how Hasura plans to support other DB's besides Postgres, which they've already been exclusively working on & supporting for multiple years at this point.

Another mentions that you have plans to build a UI for non-technical people. This isn't different either, Hasura already has this too.

The triggers part is also downright wrong. Check the technical document on scaling to 1,000,000 live queries.

Discussions about the scalability and tradeoffs of WAL, Triggers, and the decided implementation (interval-based polling) are given:


"We experimented with several methods of capturing events from the underlying Postgres database to decide when to refetch queries."

"Listen/Notify: Requires instrumenting all tables with triggers, events consumed by consumer (the web-server) might be dropped in case of the consumer restarting or a network disruption."

"WAL: Reliable stream, but LR slots are expensive which makes horizontal scaling hard, and are often not available on managed database vendors. Heavy write loads can pollute the WAL and will need throttling at the application layer."

"After these experiments, we’ve currently fallen back to interval based polling to refetch queries. So instead of refetching when there is an appropriate event, we refetch the query based on a time interval. There were two major reasons for doing this:..."

You're 100% correct about their triggers and thanks for the link. I actually hadn't read it but I did today after waking up and it's quite fascinating.

Hasura is a great product and it deserves all the love it gets here. Tanmai and I exchanged a few messages in February, so we're aware of what each other is working on. I don't really feel this space is a winner-takes-all market (see also nhost, graphile, subzero, amplify - all great products).

The parent comment asked "how is it going to compare", so I listed a couple of promises. It's up to us now to deliver on those promises - we're only a small team and our startup is 5 months old, but we move fast. I don't plan to disappoint. Sorry this reply took a few hours; I wanted to make sure I read your link (and slept) before commenting.

This might be pedantic of me, but conflating RLS and auth isn't a great look. RLS is a general purpose mechanism for constraining row operations, and auth has to do with usernames and passwords and session tokens.

This is what we are targeting: http://postgrest.org/en/v7.0.0/auth.html

We are still doing a heavy assessment of whether this model can be generalised for everyone. It covers the details of both authentication and authorization - we are just building a nice/easy way to enable this for everyone (probably using the same model as Postgraphile: https://www.graphile.org/postgraphile/security/)
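For anyone curious what that PostgREST-style model looks like in practice, here is a minimal sketch (hypothetical `todos` table; PostgREST exposes JWT claims as session settings like `request.jwt.claim.sub`, which RLS policies can then read):

```sql
-- Sketch only: PostgREST-style row-level security.
create table todos (
  id bigserial primary key,
  owner text not null,
  task text
);

alter table todos enable row level security;

-- Each user sees and modifies only their own rows; the owner column
-- is matched against the "sub" claim from the verified JWT.
create policy todos_owner_only on todos
  using (owner = current_setting('request.jwt.claim.sub', true));
```

The appeal of this approach is that authorization lives next to the data, so every access path (API, dashboard, psql) is subject to the same rules.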

I have seen Oracle heavily employing RLS for authorization in E-Business Suite, and it made things so much easier.

Maybe they mean authorisation rather than authentication but like you, I am curious if they will elaborate further on how is Postgres RLS used.

Probably similar in an overall sense to Postgraphile et al[0], in case you haven't seen that - although I am also interested in the specifics relating to Supabase.

[0] https://www.graphile.org/postgraphile/security/

As far as pedantry goes, wouldn't conflating RLS and AuthN be wrong, but AuthZ sort-of correct? At least, that's my understanding.

Hey there, I like the product and think your business model is admirable. I would like to suggest that you consider offering some support to your enterprise patrons. $1000+/mo is very steep for an ad, even for a popular product. IMO that should come with some amount of white-glove treatment.

That's a good call - let me update!

edit: OK added - https://github.com/sponsors/supabase

About the choice of the database. Why did you decide to go with PSQL instead of a NoSQL database as Firebase does? Doesn't that affect your scalability?

Otherwise, great project!

Actually we chose Postgres specifically because it is scalable. I had to switch from Firebase in my previous company because we hit some scaling limits. Postgres handles the scale no problem - it's crazy the performance improvement we saw (and it's ACID)

The trickiest part was the realtime functionality - https://github.com/supabase/realtime

How much cheaper than Firebase do you expect to be when you launch?

Good (leading) question :)

It's hard to answer because we're so early, so the most honest answer I can give is: i don't know.

Firebase just changed their pricing to "pay as you go". I need to dig into their pricing to give a good comparison, but I think we will be much cheaper on the IaaS aspect - we will try to offer more value via the dashboard (i.e. we want to build an Airtable-like interface for non-techies to use), and charge on that aspect.

One more thing - we personally find predictable pricing to be an important feature of IaaS and therefore will probably price differently from Firebase's usage model.

Or you can do what many usage-based models do: provide a pricing calculator and example usage for common use-cases.

Any link to deployable dockerized versions?

There's a version here: https://hub.docker.com/r/supabase/supabase-dev

But we will be releasing a much more robust version with docs soon!

How does app handle authn?

see my other comment

> We want to nail the auth and we're looking at leveraging Postgres' native Row Level Security. This is a tricky task to "generalize" for all businesses but we have some good ideas (we're discussing in our repo if you want to follow - https://github.com/supabase/supabase)

This is really interesting. I spent a while a few weeks ago looking around for a serious, open-source BaaS effort backed by... anyone.

I've been influenced by code generators like xo/xo [0] and sqlboiler [1] recently (so you can have type-safe APIs and you still manage/own the resulting code).

My bet is that you can generate an entire API _and_ basic CRUD browser UI from a db schema. I've been working on a code generator that does this. It can currently generate an entire Go REST API with CRUD operations from a PostgreSQL database. Next is to generate a React/TypeScript UI and add auth support.

The advantage a project like mine has over this is that it defines a standard API specification and you can build APIs and UIs against that standard. Right now the only API is Go, but I'd like to build a Java one too and would be open to community submissions.

Same thing goes for database dialect. Right now it's PostgreSQL only, but this kind of code generation can be done on any database once a driver is added.

Won't link to it here because the whole thing is WIP but it's on Github. I see this kind of project as a long-term base for rapidly building any sort of db-based applications in the future.

[0] https://github.com/xo/xo

[1] https://github.com/volatiletech/sqlboiler

> My bet is that you can generate an entire API _and_ basic CRUD browser UI from a db schema

That's what we're building! We already have the auto-APIs, and auto-documentation. The auto-UI is in the works. It essentially is Airtable, but backed by Postgres

Also, we will provide "meta" REST APIs for your database. Want to programmatically add columns or fetch all your database types? No probs.

As an eng manager, my concern with Supabase is the complexity of the systems involved. There's PostgREST (Haskell) and then a bunch of Elixir on top? Not only is that two services (plus the database) but they're in languages that aren't easy to hire for if I have to do maintenance or development.

I don't think this will be a problem for attracting startups (who will take up the new hotness) or for large enterprises (where you can package all the services into a black box that you support). And I get that you're building on some well-known existing software (PostgREST, primarily) but I'm still concerned about the operational overhead.

My preference (as an eng manager) would be to operate (and develop hooks for) a single generated server in Go, Java, or Python rather than manage a real BaaS.

Still, it's a cool project and good to have open source _and_ backed by YC. I passed it along to friends who are looking for this kind of thing. Best of luck.

Here's [0] the docs site for the code generator I'm working on. It generates a Go REST API with basic CRUD operations (plus filtering, pagination and sorting).

Next up is authentication and a React UI.

[0] https://eatonphil.github.io/dbcore/

I built something a while ago called bam[0] that basically did that. You specced your application in an XML file and it built out the admin UI, ORM code, and DB schema. I took a bet on a soon-to-be-dead framework and ultimately didn't have time to continue the project, but it worked pretty well.

That makes sense. I'm hedging against dead technology by keeping the database and code generation parts separate from the templates. This way you can have templates for any languages so long as they speak a shared (HTTP) API specification.

I did the same, I just didn't feel like going back and rewriting all the templates.

Whoops! Forgot the link earlier: https://github.com/jimktrains/bam I wish I had included an example in there though :(

The advantage with firebase web sockets is that when they inevitably get blocked by firewalls you can always say "Would you mind whitelisting google? Thanks."

Isn't this more like "An open source Firestore alternative" right now? Do you plan to bring other features as well?

I'm a big fan of Firebase and use it whenever I can. The reason it's appealing is because of the suite of tools and how well they work together for bootstrapping (auth, Firestore, storage, analytics etc). No single feature by itself is useful to me.

I would definitely switch if this at least had (auth + db + file storage + functions).

See my comments here: https://news.ycombinator.com/item?id=23320443

tldr: > We've been building furiously since January but (not surprisingly) we haven't yet reached feature parity. But we will :)

I'm a big fan of Firebase and have seen a lot of projects get up and running very quickly and scale as well.

However, I think there is an increasing number of reasons why you want your data stored in a SQL database that you can access directly. The open source tooling being built around SQL (usually Postgres) as a standard is getting better and better, and it's going to be hard for Firebase to compete with all those offerings. If I run a Postgres database I can instantly have tools like Hasura [0] and Metabase [1], along with others that add a ton of value out of the box. However, maybe those tools will also integrate with Firebase.

Anyway, my point is that this is best of both worlds, so great to see!

[0] https://github.com/hasura/graphql-engine [1] https://www.metabase.com/

The thing that's really invaluable for me is the Firebase JavaScript implementation. Saving state in a web app, with new objects addable/editable offline and Firebase automatically synchronising when the network is available - with me doing absolutely NOTHING - is absolutely insane. Having just the API isn't really going to cut it.

This looks to just be the "real time" DB? How is this different from using GraphQL subscriptions via Hasura/Postgraphile, or RethinkDB?

Also, it seems like it's missing everything else that Firebase provides. Authentication, authorization, storage, hosting, etc. To me, Firebase's value prop is more than just a database.

AFAIK Firebase doesn't actually offer any real authorization do they? It was some time ago I last built a project with them, but I remember having to create a custom "roles" attribute in the Realtime DB for users.

I answered this over here: https://news.ycombinator.com/item?id=23320443

tldr: > We've been building furiously since January but (not surprisingly) we haven't yet reached feature parity. But we will :)

Sounds good. Would it make sense to put some info about your roadmap on your main marketing pages? There's pros and cons of that, but for positioning sake, I'd recommend it, and given that it's still an alpha, I think that mitigates the expectations of feature completeness.

Good idea. This post caught us off-guard but I’ll update the site as soon as things settle down

Finally - I've been waiting for something along these lines. Firebase is kind of the poster child for vendor lock-in, and has been around for a long time. It's time we had some healthy competition.

This has been around for a while: https://parseplatform.org/

Thought Parse died a while ago with FB?

In some ways it was born when that happened :-).

I keep meaning to give it a go self-hosted, but I have this horrible feeling it needs MongoDB, but I should check that again.

I tried it when Buddy hosted it for free and it's quite nice. I prefer it to Firebase because the API felt simpler and more discoverable. I think the fact that no company is "pushing it along" means it stays simple.

When FB/Twitter/whoever dropped it, they open sourced the platform completely — they just weren’t offering managed hosting anymore, and they weren’t maintaining it. This looks like a maintenance effort by the community that has formalized itself

They have developed a robust release cadence; it is for real and used broadly in production. https://github.com/parse-community/parse-server/releases

Genuinely curious, what are the hot spots for vendor lock-in?

I’m currently building an MVP for a large client and I’m thinking of replacing Firebase for a custom back-end if the project grows. I think with the current size it would be easy but I want to avoid passing the point where it becomes hard to undo.

[Supabase cofounder]

Firebase is really only bad if/when you decide to move - usually because of scaling/performance issues. For example, you can only query one document per-second. Once you decide to migrate away, it's very painful - but the truth is all migrations are painful.

This is one of the reasons we chose postgres. If you want to migrate away, you can just "take your database" with you. PG can scale with the best of them.

Edit: I said "you can only query one document per-second" but this is supposed to be "you can only query each document once per-second". Sorry!

>you can only query one document per-second

That's not true. You can't query any single document more than once per second, which is very different. You can certainly query many separate documents per second.

Sorry - I intended to write that but I guess it was a Freudian slip. You're 100% correct - each document only. Let me add a note

"you can only query one document per-second".. uhm, that doesn't sound right - it would make Firestore useless for all intents and purposes. The only limitation I know of that sounds close to that is 1 _write_ per second to the _same_ document.

So are you saying there is no way to get a database dump of firebase, so you have to query both your old and new databases (and deal with all the resulting consistency issues) for a transition period unless your service wants downtime?

Firebase has backups to cloud storage, there is no problem obtaining a dump of your data, but that's only one part of a migration

Take a look at Hasura - GraphQL out of the box on top of solid relational database (Postgres)

With Hasura you still need a third-party service such as Firebase or Auth0 for authentication. Also, it depends on serverless functions for doing any business logic.

This is a misconception. Hasura doesn’t depend on serverless functions for doing business logic.

If you are comfortable writing GraphQL, you can add your own custom GraphQL server to Hasura for business logic.

If you are comfortable with REST APIs (or something that already exists), you can use Hasura Actions to define GraphQL types and call your REST endpoint to perform business logic.

Now where this server is hosted is totally up to the user. It can be serverless functions or a monolith server (in a language of your choice) hosted anywhere.

Hasura just needs the endpoint :)

> you can add your own custom GraphQL server to Hasura for business logic

> If you are comfortable with REST APIs (or something that already exists)

This is an order of magnitude worse than simply having to "depend on serverless functions". You're saying that for any real-world business logic (read: non-CRUD writes to the database - every app has these), you need to re-implement another GraphQL server? Or simply have something that "already exists" to solve your problem? Isn't that what Hasura is for?

> Now where this server is hosted is totally upto the user.

I think the dream of Hasura-like products is to not have a bunch of servers everywhere that need to be managed and coordinated. This is the beauty and power of PostgreSQL. Containing our business logic in it is ideal. At the very least, containing our business logic in one Hasura instance would be second best. Calling out to some other REST API or custom-implemented GraphQL server defeats the purpose of a self-contained GraphQL layer. If some of the data lives elsewhere, sure. But what if the data just lives in our database?

Perhaps what they meant is that it requires application code (server or serverless) for business logic mutations, instead of surfacing database functions as RPCs.

This was the particular reason I moved away from Hasura. Business logic mutations in SQL are too powerful to give up and replace with JavaScript, in my opinion.

We have this feature high up in our priority: https://github.com/hasura/graphql-engine/issues/1514

We already support user defined PG functions to be exposed as graphql queries so this is a natural extension.

I believe Hasura is working towards being able to write Actions in SQL.

How are you surfacing RPCs from the database? That sounds interesting.

A key insight I've arrived at over past few months is GraphQL is most useful for multiple teams to have "one" API endpoint and they can then work on front end features without blocking each other, as well as write different clients (and therefore many different possible "queries") without having to constantly re-write the back-end API. In other words, ultimate flexibility on the front end when you don't know what queries you need and you need to divide work across teams/developers.

This comes at the cost of the complexity of implementing a performant GraphQL server. Perhaps this is another instance of a thing that gets really popular because it's good for large corporations rather than ideal for solo devs or small projects - like NoSQL and other scalability optimizations vs SQL, etc. Hasura seems the best I've come across so far at making this easy/quick.

But in many situations, at least for my use-case and likely many others, simply writing well-thought-out SQL queries is both simple and easily iterated upon, and requires fewer layers, less infrastructure, less code, etc. In the Hasura scenario, custom SQL queries are inevitably needed anyway; Hasura mostly gets you the "crud" operations quickly.

If you construct the "screens" or "pages" of a Next.js web app, for example, as a collection of RPCs defined in SQL - either requested through "/api" routes integrated with the framework, or even better (when possible) making your database calls [using raw SQL of course] directly within the component "getServerSideProps" and "getStaticProps" calls[0] - there isn't really anything simpler in my view. No ORM, no GraphQL.

By RPC, I simply mean a stored SQL function or view. Executing one is as simple as:

  // some API endpoint, which then queries the database:
  await db('select pay_balance($1)', [leaseId]);
  return res.status(200).send("ok");
The stored function is written in SQL, not in JavaScript HTTP handlers, and has full access to all the data in the database to implement transformations on that data.

Reading "The Art of PostgreSQL"[1] currently, and a constant refrain is to push as much business logic into the database as possible. When you do this, you avoid so much complexity versus having it spread around application code, cloud functions, microservices, etc. ACID, transactions, etc. all come with it.

[0] https://twitter.com/adamwathan/status/1246144545361997829 this uses a query builder, I wouldn't even do that. Just raw sql everywhere.

[1] https://theartofpostgresql.com/

The tagline, "Turn thousands of lines of code into simple queries", couldn't be more impactful in my thinking of late, and it's been extremely beneficial, making me more productive with simpler, less error-prone code.

> In Hasura your still need a 3rd party service such as Firebase or 0Auth for Authentication

That's false. You can write your own service or use an open source authentication system, like you would on a normal website. Or use JWT tokens. It doesn't have to be a third-party service.

nhost.io basically does this. It packages Hasura with its own auth backend, as well as an S3-compatible storage backend. They're adding lambda functions soon.

Effectively a firebase alternative using open source tech


I actually built auth directly into Postgres similar to this: https://github.com/sander-io/hasura-jwt-auth (not sure if this is the best idea, but it works well)

For business logic you can trigger (Events) webhooks to any service or stack of your choice, or use GraphQL schema stitching. And now there's a new Actions feature as well.

https://github.com/daptin/daptin, DIY self hosted firebase and more!

Another solid alternative is Realm. We have used it for several projects, and the experience was light years ahead of Firebase.

But since they got acquired by MongoDB things have gone a bit quiet, so who knows?

AWS Amplify seems like it's trying to do something similar

I eventually settled on Hasura but I did evaluate amplify. It looked to me like it was too complicated to break away from their db so I gave up on it (it was possible, but it felt like you had all the complexity of AWS IAM permissions, but just to access data). I didn’t go too deep so I don’t really remember the details but Hasura seemed the better approach (still a happy user of it).

Gotta say I need to take a better look at Hasura
