Following this intently, because Firebase has no true competitor, let alone an open source one. Nice work so far.
Both databases are super limited and put the whole burden of work on the client(s) for anything beyond very simple queries.
The functions have some of the worst cold starts I've experienced. Until very recently the dev experience was terrible but Firebase local dev was released a couple of days ago so this should be solved.
As for alternatives, AFAIK there is nothing that replicates the whole platform but there are better options for the individual parts.
- FaunaDB instead of Firestore.
- Vercel for edge static hosting + cloud functions.
- Netlify for everything except the databases
The specific issue I was running into: https://github.com/googleapis/google-cloud-node/issues/2942
It displays data from the last 3 days of tests. Cold starts on Google Cloud peak at 60 seconds, the second worst after Azure.
The point of using serverless is handling massive traffic spikes efficiently and cheaply but 10 concurrent connections doesn't seem very massive.
Cloudflare Workers are doing much better than the rest in this respect (see the max value; the graph is misleading), but they have serious limitations (e.g. a max of 50ms of CPU time). This makes them a bad fit for most situations.
In my current project, which requires the lowest possible latency, I'm having more success with Fly.io. Instead of cloud functions you create Docker images which are distributed across their regions and scale up/down based on demand in that region.
These decisions are alienating to me. I've been working on migrating my Heroku MERN stack app to Firebase but now I'm having serious second thoughts. I was initially attracted by the free and flat-fee plans, but I won't ever sign up for a service that demands unlimited liability. Without a way to put an absolute cap on expenses, whether $10k or $0.01, I'll never upgrade my Firebase account.
I've spent this week reviewing some alternatives, and standing up everything myself on a VPS is more attractive than losing sleep over the possibility of my crappy code bankrupting me on Firebase.
Both are indeed correct: you must now provide a credit card to use Cloud Functions, and you do get a free tier on the Blaze plan.
In fact, the Blaze plan comes with a free tier that is bigger than the limit of the free plan was, but you can no longer use it without entering a credit card.
This change came while adding support for newer node.js versions. Cloud Functions now uses Cloud Build to create its containers, and while Cloud Build does offer a free tier, you'll have to enter a credit card to use it.
So if someone builds a business that makes very little money but uses a ton of compute, the cloud provider should subsidize that? And conversely if you make a ton of money using barely any compute, you should pay your cloud provider a premium? This business model makes no sense.
Ideally a business provides a service its customers are happy to pay the costs of, and not just the costs but also a significant profit. That leads to a virtuous circle where the business is incentivized to continue to invest in the service, adding more servers, engineers, and support personnel to add more valuable features for its customers.
Otherwise it’s just a matter of time before they post that farewell letter giving you the date the servers are shutting down.
Admittedly, some other solutions were shitty and ate into the battery, but this wasn't the case for everything and forcing everyone onto a proprietary platform if they need an always connected background service is not "a great product," but a defective one.
I get your point that people expect differently from Google, but Google brought that upon themselves by marketing Android as an open source alternative to iOS. Now they've got the market share and developers, they're no longer interested in marketing it this way and are now apparently more interested in cementing the walled garden.
The battery issue is a valid concern, but I don't buy that this policy is purely to fix that problem. They took the opportunity to make sure people are passing all of their data through Google services in order to have full functionality on their devices.
They should charge from the beginning.
> When Google charges it's a problem
Where is that? It's a problem when it's abrupt.
As a business, if you're getting something for free, assume someone else is subsidising you, as there's no such thing as free lunch.
> if you're getting something for free, assume someone else is subsidising you, as there's no such thing as free lunch
And how does this matter to the comment that OP made, which I was replying to?
only if it means they'll kill the service, or "you're the product"-ize its users
If your business depends on Google, you are taking serious risk. Even if you have a contact within Google who can champion your case, a loss of your Google services could end your business.
It's really sad that Google and other big corps do this to people. They destroy their livelihoods on a whim. We really have to think of a way to divest ourselves from them (GCloud, AWS, I'm looking at you too).
The lasting impact of Firebase is that it proved to the world that being opinionated about the database in order to provide good tools is a viable business model, rather than forcing ORMs onto people to appease every database flavor under the sun.
Good luck to you guys, I like the trend.
> we're supposed to ignore all of the security work that went into making that database software for the past however many decades, and turn it into a dumb storage medium instead
I couldn't agree more
A UI without a persistent root/admin connection lingering in the wild cannot leak such a root/admin connection, can it?
How many retail corporations have had breaches resulting in huge credit card dumps that would not have happened if they had not been using frameworks with persistent root/admin database connections outside of their internal office network?
Today, with managed/containerized DBs, microservices, and share-nothing architecture, most apps I've seen use their own database instance, in which case the access control machinery seems to be more of an obstacle than a help. E.g. just the other day I ran into issues because the user used for schema migration did not have access to some tables in my app's Postgres database, since it has table-level access control.
Think about, for instance, a company's internal customer database (including billing, etc), accessed with internal tools. CSR/Support people need to be able to see billing status but not credit card numbers, for example. Different levels of management need to be able to generate reports, but again not get credit card numbers.
For these use cases the opposite is true. The further you take auth and permissions away from the database/devops admins, the more times you're going to need to reinvent auth and permissions boilerplate for every internal tool under the sun.
Which is a pointless exercise if none of the tools in question are ever going to be accessible outside of the company's VPN anyway.
Active Directory / LDAP are not abandoned technologies, after all.
The counter-argument is: would it be better to have 10, 20, 100 such possible situations milling around the building every day, or just one? Maybe if there's just one, you put enough effort and people into it to get it right. That's the pitch for AD/LDAP being used for all auth and permissions, and I think a compelling one at that.
Off the top of my head, I'd say security might be improved but ease of use may suffer. Interested in hearing others' thoughts on the matter.
Eventually you're going to want to bring that data all together, and now you need data architects and engineers developing ETL solutions; the resulting data warehouse becomes that singular database you were trying to avoid from the start, with all the overhead of building and maintaining it on top of all the maintenance of the source silos.
I'm not sure how to avoid this. The need to answer business question favors a single, queryable DB whereas the need to keep applications siloed and abstracted away favors multiple application data stores.
Any big downsides to doing things this way? I'm no DBA so I appreciate any insights.
I guess it's also just more instances to manage in total, instead of 1 + n it's 2n.
You _could_ run the DBs on the same instance but not in the same DB process, via containerisation perhaps. That way you could reduce your instance count while still keeping (somewhat) operational separation.
Very glad to see people working on alternatives.
IMHO though, what makes Firebase special is the glue they use: account management and libraries come for free so you can start tinkering on the interesting parts. Firebase is not just Firestore, it is a set of tools that work together seamlessly.
When this is achieved, your own product feels mature, as if you were implementing a product on top of Firebase or another established platform, with everything "boring" handled by someone else.
In my mind, Firebase is a very flexible CMS with sane defaults.
I'm not familiar with the project historically, as I've only been using it for about 2 months, but it seems to me like there's been an increase in activity since the start of the year - like the package modularization, new UI components, and new docs (which are still pretty bad to be honest). I totally agree it's sub-par to Firebase, and if you stray anywhere off the beaten path you're completely on your own (i.e. not using React, seemingly (!!)), but I do think it has the potential to become a viable alternative.
My primary issue with the platform is the choice of a NoSQL database, which, in my view, just doesn't match the majority of application requirements. If you want to do any sort of text search, you have to spin up a whole sodding Elasticsearch domain, which is expensive as hell for a new product and takes literally 20-30 minutes every time it's updated with an `amplify push` command. I also waited over a week for one to be deleted not too long ago. That's why my plan is to replace the API with a Lambda function running Postgraphile with some JWK logic to use Cognito for authN, while keeping the identity management and file storage stuff.
I’ve found Hasura+Cognito+S3 as the closest equivalent to Firebase. It’s not as neatly wrapped up into a single product but it’s pretty close.
Dashboard, realtime stuff, etc. are great too. RESTful APIs I can of course get with PostgREST, which is insanely excellent, so the value I'm looking for is to have everything managed, from hosting/storage to security to all the other annoying nitty-gritty that I'm likely to get wrong.
> We want to nail the auth and we're looking at leveraging Postgres' native Row Level Security. This is a tricky task to "generalize" for all businesses but we have some good ideas (we're discussing in our repo if you want to follow - https://github.com/supabase/supabase)
> RESTful APIs I can of course get with PostgREST
That's what we use! See our supporting libraries:
https://github.com/supabase/postgrest-py (coming soon)
Also, I'm a long time user and a huge fan. My previous company is featured on their docs (blog post: https://paul.copplest.one/blog/nimbus-tech-2019-04.html#api-...)
Check the person is who they say they are (authenticate), and then check they're allowed to access the thing they want to view (authorize).
The first is quite easy to abstract, the latter is basically custom to most applications (for different definitions of "custom").
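A minimal Python sketch of that split (all names and the toy in-memory stores here are illustrative, not from any real auth library):

```python
# Toy in-memory stores; a real app would use a user table and hashed passwords.
USERS = {"alice": "s3cret"}                # username -> password
PERMISSIONS = {"alice": {"read:report"}}   # username -> allowed actions

def authenticate(username, password):
    """Who are you? Generic across apps, easy to abstract."""
    return USERS.get(username) == password

def authorize(username, action):
    """Are you allowed to do this? Usually app-specific."""
    return action in PERMISSIONS.get(username, set())
```

Every request passes through both gates: `authenticate("alice", "s3cret")` first, then `authorize("alice", "read:report")`. The first function looks the same in nearly every app; the second is where all the custom business rules live.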
Just thought you should know, when I read "Firebase alternative", I got excited about something else.
I'm even more excited for your project in that case. I hope I can make the time to build a prototype, right now I'm doing the same for retool, and I feel very spoiled for good choices all of a sudden when it comes to backend abstractions.
We have client libraries for PostgREST, which then gets installed as a dependency in our main client library.
What have you used instead of Auth0?
But generally I'm of the opinion that auth tightly integrated into the platform has a lot of advantages. My dream is to avoid (almost) all glue between my components, I think Firebase became successful in big part because of that.
• Generates CRUD operations for given entities
• Generates SQL migration files
• Gives developer full control after project is generated (nothing added on top of raw SQL/Go)
• Minimal dependencies (basically lib/pq and grpc)
• TLS out of the box
We need more people to use it, because the original developer is going into maintenance mode and we're trying to strengthen the community. It rocks!
Take Postgres. You write code in SQL, a programming language unlike any other mainstream programming language. Instead of writing code with for loops and if statements, you write it with joins and where clauses. On top of that, Postgres has a "magical query optimizer" that takes your SQL and figures out how to execute your query. Unless you have a good understanding of indexes and how they impact the query optimizer, you'll have a hard time getting Postgres to be fast. I still regularly say WTF when optimizing Postgres queries even though I've been doing it for years. And that's not to mention the tons of database-specific terminology, like tables, rows, and schemas, that you have to learn before you can become an effective user of Postgres.
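A rough Python analogy for why indexes matter so much to the optimizer (this is not how Postgres implements indexes internally, just the cost model in miniature: a sequential scan touches every row, an index jumps straight to the match):

```python
# 10,000 toy rows, looked up by email.
rows = [{"id": i, "email": f"user{i}@example.com"} for i in range(10_000)]

def seq_scan(email):
    # Without a usable index the planner must check every row: O(n).
    return [r for r in rows if r["email"] == email]

# The "index": a one-time build that makes point lookups roughly O(1),
# much like CREATE INDEX ON users (email) would.
index = {r["email"]: r for r in rows}

def index_scan(email):
    return index.get(email)
```

Both functions return the same row; the difference is that `seq_scan` does 10,000 comparisons and `index_scan` does one hash lookup, which is the gap EXPLAIN is trying to tell you about.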
As much as HN likes to bash it, I think Mongo has done the best job of creating an easy to use database. With Mongo, you can store JSON and you can pull JSON out. Of course, I would never use Mongo personally.
I'm hoping that Supabase is able to bring about the next-phase of databases by not just making it possible to make a database fast, but by making it easy to do so.
> I'm hoping that Supabase is able to bring about the next-phase of databases .. by making it easy to do so
We chatted to a lot of developers at the start of this year. Most of them thought that Postgres was amazing and wanted to use it, but they still chose other options (like Firebase) because they were easier. At that point we made "database UX" our main focus.
Then I saw that THIS was written in elixir.
That made me take this much more seriously!
Basically we are refactoring it so you can pipe your database changes anywhere - webhooks, kafka, serverless, slack etc
Soon! We are joining the next YC cohort and we will hire after that
Only some parts are elixir. The full stack is here: https://supabase.io/humans.txt
I am a senior dev (18.5y experience in total), currently focusing on Elixir and Rust (3.5y with Elixir so far, learning Rust fast and currently making an Elixir<=>Rust bridge for the sqlite database).
I'll be checking out your page every now and then. Do post in the "Who's hiring?" thread when you are ready to hire! I'll be looking for you there as well.
I feel like the key is in the choice for the realtime backend (Phoenix/Elixir). I've built real-time firebase apps (JS) in the past and today my choice would probably be Phoenix. It also saves many LoC (80-90%?), and it's a joy to work with. Reduces the JS fatigue and opens up a new paradigm that makes programming fun again.
We migrated to Postgres and started with Triggers/NOTIFY. But that also has some limitations (8000 byte payload limit), so we implemented the Elixir server: https://github.com/supabase/realtime
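For anyone stuck on the trigger/NOTIFY approach, one hypothetical workaround for the payload cap is to chunk oversized events and reassemble them on the listener side. A sketch (the envelope format and helper names are made up here, and Supabase's actual fix was the Elixir server, not this):

```python
import json

NOTIFY_LIMIT = 8000  # Postgres caps NOTIFY payloads at roughly 8000 bytes

def chunk_payload(change, limit=NOTIFY_LIMIT):
    """Split one serialized change event into NOTIFY-sized envelopes."""
    raw = json.dumps(change)
    body = limit // 2  # headroom for JSON escaping + envelope metadata
    pieces = [raw[i:i + body] for i in range(0, len(raw), body)]
    # Each envelope carries its position and the total, for reassembly.
    return [json.dumps({"i": i, "n": len(pieces), "data": p})
            for i, p in enumerate(pieces)]

def reassemble(chunks):
    """Listener side: order the envelopes and decode the original event."""
    parts = sorted((json.loads(c) for c in chunks), key=lambda p: p["i"])
    return json.loads("".join(p["data"] for p in parts))
```

Even with this, you're paying the chunking overhead on every large row change, which is part of why reading the WAL directly ends up being the cleaner design.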
This just seems to be an Open-Source implementation of Firebase's "Realtime Database", which is kind of superseded by the new "Firestore", which is much more popular.
However it's a smart business move, as many products are locked into the "Realtime Database", so this could provide an easy exit door to Open-Source software.
tldr: > We've been building furiously since January but (not surprisingly) we haven't yet reached feature parity. But we will :)
For me the MVP, before I could use it for my commercial projects, would be: DB+auth. At that point, I could switch - and probably would.
Also, kudos to you for being the diametric opposite of a 'useless' startup. Not only would I use this once I could, I'd talk it up to everyone in my space.
We like building things that people will use. The number of features that Firebase offer is vast so it might take us some time, but we will get there and we’ll make sure we do it in a way that benefits the opensource community
Let me assess this, back-of-the-napkin style:
1. linkedin 'firebase' + amazon (current employer) -> 695 people.
2. 10% of above = 70 FTE
3. They started (googled): They founded Firebase as a separate company in September 2011 and it launched to the public in April 2012.
1+2+3 = It's impossible you'll have feature parity in the next 10 years.
BUT it's possible you can achieve Pareto parity, leaving the most complicated 20% off the table. Good luck :)
The team behind it is awesome (one is a Python core dev), it's FOSS, and they have been using it internally for some time.
If there is something missing in our GraphQL feature set, feedback is very welcome :).
E.g. the Django ORM will let you define the schema and be the single source of truth for your DB and your REST API.
Is this a replacement for the whole suite of Firebase offerings (hosting, auth and data store) or just a subset?
What’s the migration pathway for someone currently on Firebase?
> data store
Yes - our data store is just Postgres so you can migrate in/out - no lockin
> hosting, auth
Not yet. We want to nail the auth and we're looking at leveraging Postgres' native Row Level Security. This is a tricky task to "generalize" for all businesses but we have some good ideas (we're discussing in our repo if you want to follow - https://github.com/supabase/supabase)
> What’s the migration pathway for someone currently on Firebase?
This is also something we're building - a migration tool. Mapping NoSQL to RDBMS is complex, but something I tackled in my previous company. We'll build it so that Firebase and Supabase run "in parallel", which we can do since both have realtime functionality. And then when you're happy, you will be able to switch off Firebase
This looks really cool! Honestly I think the Firebase comparison may be throwing some people off here because this is a SQL-based system, which means there's a huge base of existing tools/techniques/knowledge to build from.
I like any tool which makes it easier to build an app. It's 2020 and we still start every app like this:
* Pick a database
* Spin up a server
* Connect the DB to the server
* Create a REST API for the server so the client can talk to the DB
* Somehow make that secure enough
* Write a bunch of CRUD code on the client
If the answer is “just call an api to handle your data”, sure, that works for POCs/small apps. But I’m a bit hesitant to put all my business's data inside a proprietary data store that I don’t control and which isn’t collocated with my app (i.e. every data store request makes a round trip to the Google server hosting the Firebase app... maybe not a huge deal if the app itself is in Google Cloud?)
Disclaimer: I’ve only used firebase tangentially, haven’t built an app from scratch with it.
I don't want to ever write this sort of boilerplate code again, I want it generated from a DB schema or something similar . There are better things to spend time on.
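As a toy illustration of the idea, even a few lines of Python can derive parametrized CRUD statements from a schema description (the table and column names here are hypothetical):

```python
def crud_sql(table, columns):
    """Derive parametrized CRUD statements from a table description."""
    cols = ", ".join(columns)
    placeholders = ", ".join(f"${i + 1}" for i in range(len(columns)))
    sets = ", ".join(f"{c} = ${i + 2}" for i, c in enumerate(columns))
    return {
        "create": f"INSERT INTO {table} ({cols}) VALUES ({placeholders})",
        "read":   f"SELECT id, {cols} FROM {table} WHERE id = $1",
        "update": f"UPDATE {table} SET {sets} WHERE id = $1",
        "delete": f"DELETE FROM {table} WHERE id = $1",
    }

queries = crud_sql("users", ["email", "name"])
# queries["create"] is "INSERT INTO users (email, name) VALUES ($1, $2)"
```

Tools like PostgREST take this much further by generating the whole API from the live schema, but the core trick is the same: the schema is the source of truth and the boilerplate falls out of it.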
They each have their purposes. Django will get you to market fast with all the features you need, and keep you there for a long time. But it forces (through its library structure) and encourages (through its common patterns) ridiculously tight coupling.
I work on a Django monolith now that runs an org needing to grow beyond it. We need something not quite offered by a Django library, or we need to move something with different scaling needs out to another service, and it's all miserably difficult, because they followed all the Django-recommended best practices. The framework controls you; you don't control it.
Now we're back to writing boilerplate to enforce a semblance of clean architecture onto it. It's kind of boring sometimes, but once a domain gets refactored out of the Django way, our ability to deliver features quickly and safely in that domain goes up 10x.
The "Fat Models" recommendation is one of the most destructive in my opinion: https://django-best-practices.readthedocs.io/en/latest/appli..., along with Django Rest Framework "Model Serializers". A JSON serializer that talks directly to the database is just madness.
If all your business logic is in models then of course your app is going to be completely unmaintainable and it's going to take developers weeks to do things that should normally take a couple hours.
There is definitely a real problem in the Django community where lots of people have recommended architecting apps in bad ways, so then you get developers who want to implement the app the "standard" way that Two Scoops or whatever recommends. But Django itself is still a great tool, you just need to be willing call out your teammates if they're unable to think for themselves.
I think it's good to get familiar with a variety of ways of building applications over a career so you can pull from the best of them to, again, be able to focus on _business problems_ you have and can solve. To me that includes a sustainable development model and system architecture.
The Django model is a representation of the database state. It's an infrastructure-layer object. It is _very_ tightly coupled to the database.
Your business needs should not be so coupled to the database! While it is very helpful for an RDB to accurately model your data, a database is not an application. They have different jobs.
(The TL;DR of the following paragraph is "encapsulation and interfaces")
Your business logic belongs in the "service layer" or "use case layer". The service layer presents a consistent interface to the rest of the application - whether that is a Kafka producer, the HTTP API views, another service, whatever. Your service layer has sensible, human-understandable methods like "register user" "associate device with user", whatever. These methods are going to contain business logic that often needs to be applied _before_ a database model ever exists, or apply a bunch of business logic after existing models are retrieved in order to present a nice, usable, uncluttered return value. Your service layer hides ugly or unnecessary details of the database state from the rest of the application. Consumers shouldn't care about these details, they shouldn't rely on them (so you can fix or change without breaking the interface) , and they very probably should not be presented direct access to edit whatever they want.
If you do not do this and instead choose the fat models method all of the following will happen:
1. You will repeatedly write that business logic everywhere you use the models. You'll write it in your serializers, your API views, your queue consumers/producers, etc. You'll never write it the same way twice and you damn sure won't test it everywhere.
2. You'll get tired of writing the same thing and you will add properties or methods on the model. This is the Fat Model! This might be appropriate for a convenience property or two that calculates something or derives a flag from the state of the model, but that's it. As soon as you start reaching across domains and implementing something like "register device for user" on the user model, or the device model, you are just reinventing a service layer in a crappy way that will eventually make your model definition 4000 lines long (not even remotely an exaggeration).
3. Every corner of your application will be updating the database - via the model - however it wants. They will rely on it! Whole features will be built on it! Now when it's time to deprecate that database field or implement a new approach, too bad. 20 different parts of your app are built on the assumption that any arbitrary database update allowed by the model is valid and a-ok.
1. Each domain gets a service layer, which contains business logic, but also presents a nice, reliable interface to anything else that might consume that domain. This interface includes raising business logic errors that mean something related to our business logic. It does not expose "Django.models.DoesNotExist" or "MultipleObjectsReturned". It returns an error that tells the service consumer what went wrong or what they did wrong.
2. The service layer is the only thing that accesses or sees the Django models aka the database state. It completely hides the Django models for its domain from the rest of the application. It returns dataclasses or attrs, or whatever you want to use. The models are no longer running rampant all over the application getting updated and saved willy nilly. The service layer controls what the consumers in the rest of the application can know and do.
You will write more boilerplate. It will be boring. You will write more tests. It will be boring. But it will be reliable and modular and easier to reason about, and you can deliver features and changes faster and with much less fear of breakage.
Your business logic will live one place, completely decoupled, and it can be tested alone with everything else mocked.
How your consumers (like API views) turn service responses and errors into external (like HTTP) responses and errors lives in one place, completely decoupled, and can be tested alone with everything else mocked.
Your models will not need to be tested because they are just a Django model. They don't do anything that's not already 100% tested and promised by the Django framework.
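A minimal, self-contained sketch of that shape (hypothetical names, and an in-memory dict standing in for the Django ORM so it runs anywhere):

```python
from dataclasses import dataclass

# In-memory stand-ins for the Django models, so this sketch runs anywhere.
_locations = {101: "Downtown"}
_reservations = {}

class ReservationLocationNotFound(Exception):
    """Business error; consumers never see the ORM's DoesNotExist."""

@dataclass(frozen=True)
class Reservation:
    """What the service returns: a plain dataclass, never an ORM model."""
    id: int
    location_id: int
    location_name: str

def create_reservation(location_id):
    """Service-layer entry point: the only code touching the 'database'."""
    if location_id not in _locations:
        # Translate the storage-level miss into a domain error.
        raise ReservationLocationNotFound(location_id)
    rid = len(_reservations) + 1
    _reservations[rid] = location_id
    return Reservation(rid, location_id, _locations[location_id])
```

Callers catch `ReservationLocationNotFound` and never import the models; the frozen dataclass means nothing downstream can mutate database state by accident.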
1. If you're not using models, it's a lot of work to stay fast.
If you've got a Customer instance, and you want to get customer.orders, you've got a problem if it's not lazy. If it's a queryset, you get laziness for free, if it isn't you have to build it yourself. God help you if you have anything even remotely complicated. You also need trapdoors everywhere if you want to use any Django feature like auth, or Django libraries.
2. You have to build auth/auth yourself
Django provides really nice auth middleware and methods (user_passes_test).
3. Service methods end up doing things something else should be doing.
You might be doing deserialization, auth/auth checks, database interactions, etc. All of that stuff belongs at a different layer (preferably abstracted away like @user_passes_test or serializers).
4. The model exposed by Django and DRF is actually pretty good, and you'll probably reimplement it (not as well)
The core request lifecycle is:
request -> auth -> deserialize -> auth -> db (or other persistence stuff) -> business stuff -> db (or other persistence stuff) -> serialize -> response
We've reimplemented all of those layers, and since we built multiple domains we reimplemented some of them multiple times. It probably would've been better to just admit "get_queryset" and the like are good ideas.
5. Entities are a poor substitute for regular objects and interfaces.
We've mostly ended up wrapping our existing models in entities, but just not implementing most of the properties/fields/attributes/methods. But again, we have to trapdoor a lot, we have trouble with laziness and relationships in general, and we have a lot of duplicate code in our different domains.
6. We have way too many unit tests.
Changing very small things requires changing between 5-10 tests, each of which use mocks and are around a dozen lines at least. Coupled with the level of duplication, this has really slowed us down. They also take _forever_ to run.
FWIW I think you're right about jamming too much into models; I think that works at a small scale but really breaks down quickly. I think at this point, my preferences are:
1. Ideally, your business logic should be an entirely separate package. It shouldn't know about HTML, JSON, SQL, transactions, etc. This means all that stuff (serialization, persistence) is handled in a different layer. Interfaces are your friend here, i.e. you may be passing around something backed by models, but it implements an interface your business logic package defines.
2. The API of your business logic package are the interfaces you expose and document. The API of your application is your REST/GraphQL/whatever API--that you also document.
3. Models should be solely database-specific. If you're not dealing with the database and joins and whatever, it doesn't go in models and it doesn't go in managers.
4. Don't make a custom user model.
5. Serialization, auth, and persistence should be as declarative and DRY as possible. That means class-level configuration and decorators.
6. Bias strongly against unit tests, and rely more strongly on integration tests. Also consider using them during development/debugging, and removing them when you're done.
Does that seem reasonable to you? I spend a lot of time thinking about this stuff, and I would like my life to be less about it (haha) so, any insight you can give would be super appreciated.
Each action looks like:
1. Request arrives into the app, auth happens using DRF on the API view. This is all using Django & DRF built-ins.
2. In the API view: request data gets serialized using DRF serializers, but no calculated fields or model serializers or other BS. JSON -> dict only. The dict does not have models in it, only IDs: profile_id, reservation_id, whatever. Letting the "model Serializers" turn a JSON location ID into a Location model is how you get 10 database queries before you've done _anything_. At this point we don't care if the location_id is valid. We are just serializing.
3. Still in the API view: Dict dump from the serializer gets shoved into whatever format you're going to send to the service layer. For us this is often an attrs/dataclass. If we're calling the "Reservations Service" method "create reservation", we pass in location_id, start time, end time, and the User model. The User model in this case is breaking our policy of not passing models through the service boundary, but it's the one exception for the entire code base, because it's too useful not to take getting it for free from DRF's user auth. We would be basically throwing it away then re-calling for it in the service layer which is dumb.
4. Call the Reservations Service layer. The service layer is going to do n things to try to create the reservation. If it needs to insert related records, like in a transaction, cool. Its job is to provide a sane interface for creating a Reservation, and whatever related side effects, not to only ever touch the Reservation model/table and nothing else. The base of our Domain is Reservation, creating a ReservationReceipt and a ReservationPayment are entirely within scope. Use the Payment model directly to do this if there's zero extra logic to encapsulate, or create a Payment service if you have a ton of Payment-creation logic you need to extract/hide from the Reservation service. You can still manage it all in a transaction if you want. The point is that the caller (the API layer) doesn't see this. It only sees that it's calling the Reservation Service.
5. The Reservation service will either return a dataclass/attrs objects representing a successful Reservation created, or raise a nice business error like ReservationLocationNotFound (remember when you passed in a bad location id to the API, but we didn't want to check it in the API layer?)
6. API View takes the service response & serializes it back, or takes the business error and decides which HTTP error it should be.
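Steps 5 and 6 can be sketched like this (no real DRF here; the service is stubbed and all names are illustrative):

```python
class LocationNotFound(Exception):
    pass

def service_create_reservation(location_id):
    # Stub standing in for the real Reservations service layer.
    if location_id != 101:
        raise LocationNotFound(location_id)
    return {"id": 1, "location_id": location_id}

def create_reservation_view(request_data):
    # Steps 2-3: serialize only; don't resolve IDs to models here.
    location_id = request_data.get("location_id")
    try:
        reservation = service_create_reservation(location_id)
    except LocationNotFound:
        # Step 6: the view decides which HTTP error a business error becomes.
        return 404, {"error": "location not found"}
    return 201, reservation
```

The view never sees a model or an ORM exception; the bad location ID that sailed through serialization untouched comes back as a domain error, and only the view knows it should be a 404.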
The only thing that turns me off about Rails and .NET is that it's a big learning curve _because_ the projects are so feature complete. Since I haven't worked professionally in either of them, I haven't been able to get myself to learn either well enough to do rapid prototyping.
I started a new app and I got a basic Users and Sessions model with endpoints running (I like this test because dealing with passwords and API tokens requires writing some code instead of generating it all, but also leverages libraries).
These numbers aren't useful by themselves since you don't know the details of the work I did, but they may be useful side by side:
Express/TypeScript: Took me about 24 to 30 hours to get it all implemented. I started with no ORM (node-postgres), then tried out Knex, then settled on Slonik. Because of that I had to rewrite some stuff a few times. In a bare-bones "framework", however, this is part of the penalty.
Rails: Took about 10 hours (mostly reading documentation and Michael Hartl's book).
I think I prefer the Node approach simply because I know exactly what is going on under the hood. That said, if I were a second developer coming into the code base I'd prefer the Rails approach, because I'd have to learn "the framework" anyway, and a widespread, standardized one like Rails would be preferable to learn IMHO.
In conclusion: My numbers are highly individualized and don't tell the whole story of course. In related news, I actually threw both of them away and went back to Elixir/Phoenix, my third love. I'm quite happy there at the moment, and I don't anticipate moving again.
I suspect that frameworks are ideal for coding boot camps, where one doesn't need to understand the details of what the framework does to be productive. And for boot camp grads, perhaps a familiar framework makes the code more accessible? IDK, I'm speculating.
just my 2 cents
Also, on more general web frameworks like Rails (rather than ORMs): I personally use Next.js with TypeScript, Ant Design for its React component library, and Jest for testing, plus ESLint and Prettier.
That makes the most of my IDE; it's like having a second pair of eyes behind you in real time, catching every typo via types, linting, and other VSCode extension goodies...
Have you used it with a big app?
I would just start with nice defaults like TypeScript, etc, which might help maintain a sane codebase in a large project.
But Next.js is really flexible; I find myself using it as my frontend hammer. Maybe you should take my opinion with a grain of salt, but I've felt more productive coding with it than with anything else!
When you're just experimenting, trying to find market fit and get something to stick, Firebase is a decent solution. I haven't tried anything sizeable on it though. Eventually, I'm sure we would've migrated if we were successful enough.
What are the advantages, or class of advantages, you get by moving on from Firebase to something stronger/harder?
Though that is unrealistic, even thinking about it was enough to make me want to be in full control of my application’s various layers.
All PaaS providers work hard to couple you to their platform not (necessarily) for nefarious reasons, but because that's how abstractions work. You need to be continuously vigilant and aware of when and how you are tied, do a cost/benefit analysis and have risk mitigations for things like your original thought, so good for you.
We used algolia to do full text and geo search. It worked, but I expected eventually it'd be really painful to reindex.
Same sort of thing with analytics. Firebase analytics is pretty powerful, but eventually we would want all our data mirrored somewhere we could use regular BI tools.
Then there is cost. We would have to weigh the cost of rewriting against the GCP bill, but at some point I expect it'd make sense.
Take this all with a grain of salt. We didn't get far enough to test any of these assumptions.
For analytics, on the SaaS side you have both Mixpanel and Segment, and also some nice privacy-aware alternatives to Google Analytics like Fathom and a few other indie startups.
Recently an open source Segment alternative came up on HN, too...
MS PowerBI connects with lots of stuff, although it's paid
ELK is open source
Also metabase is a nice open source tool for BI.
Lots of tools!
A benefit of using a traditional SQL database is much of this stuff exists already off the shelf.
Maybe Firebase is there or is getting there. It's almost two years since I built something on it.
EDIT: But hey, maybe that's just me not understanding the 2020 mentality of "Take my data and make it work" well enough
I've written a few more thoughts here: https://github.com/ericbets/erics-designs/blob/master/funcdb...
You're correct - we're building on top of Postgres so it's not a perfect comparison. And we're a bit early to even make it a fair claim.
I'm glad the crowd here has been so receptive. We were planning on launching in September, but it must have been picked up somewhere on the internet (thanks @habosa).
- Better user permissions, ability to actually see my users and user permissions in a UI
- Better ability to choose conflict resolution logic
- Tied to that, ability to get "patches" from the server... right now if a change happens in my Firestore document, the server sends the entire document to the client. I'd love to be able to get a patch with just the changed data (and of course send patches to the server too, which it looks like you would support).
- Join queries to save me from needing to do multiple client -> server trips. If it's SQL I'm guessing you would support this.
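The patch idea in the list above can be illustrated with a minimal diff between two versions of a flat document. This is a hand-rolled sketch of the concept, not any particular product's API; `diff_doc` and the `None`-as-tombstone convention are my own invention:

```python
def diff_doc(old: dict, new: dict) -> dict:
    """Return only the fields that changed between two flat documents.
    Removed fields are marked with None (a real sync protocol would
    use an explicit tombstone rather than overloading None)."""
    patch = {}
    for key in old.keys() | new.keys():
        if key not in new:
            patch[key] = None          # field removed
        elif old.get(key) != new[key]:
            patch[key] = new[key]      # field added or changed
    return patch
```

Sending `diff_doc(prev, curr)` over the wire instead of the whole of `curr` is exactly the "patch instead of full document" behavior described above.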
Very interesting project! I'll be following your progress.
You might want to take off the Firebase comp until ready otherwise it distracts from the cool offering.
If not what provider would you recommend?
How come Firebase still doesn't have some sort of agnostic querying layer?
Do you have good resources for learning Elixir with OTP and Phoenix ? I also believe LiveView can deliver the 80% of functionalities for 20% of the price, so I'm very interested in learning it
"Programming Elixir" and "Designing Elixir Systems with OTP" are also good, the former more so if you get started. From what I remember "Programming Phoenix" was enough to get started though, and these two books are better at getting deeper knowledge of Elixir/OTP.
2. What I really like about Firebase is that Firestore is a scale-to-zero database. It doesn't cost me anything if I have a tiny amount of data and a low frequency of users. This lets me spin up lots of small sites at no cost. With Cloud Run I have a real API server that also scales to zero. My last GCP bill was 3 cents. And yes, GCP was dumb enough to actually charge my credit card for that. Not sure why they don't have a $1 minimum.
But I like what I’m seeing as supabase’s pitch.
Hopefully open source means better documentation too. The only places I know of for Firebase are their docs and their own YouTube channel, which can feel limited once you pass the "get started" depth.
We also have a lot more to build, so to reward you for your patience we are completely free right now
Under the hood we use PostgREST. At the same time, we may (also) offer GraphQL using Postgraphile, so people can bring their own client-library.
Differences from Hasura:
- Auth: we will use Postgres RLS
- Realtime: we don't use triggers, we use WAL (much more scalable)
- Only Postgres: Hasura mentioned they will build for other RDBMS. We're all in on PG and some of the more advanced features (replication, High Availability)
- UI/UX. We will build an interface like Airtable, so that even your non-techie friends can use it
Two of your points are future feature promises. One of them is how Hasura plans to support other DB's besides Postgres, which they've already been exclusively working on & supporting for multiple years at this point.
Another mentions that you have plans to build a UI for non-technical people. This isn't different either, Hasura already has this too.
The triggers part is also downright wrong. Check the technical document on scaling to 1,000,000 live queries.
Discussions about the scalability and tradeoffs of WAL, Triggers, and the decided implementation (interval-based polling) are given:
"We experimented with several methods of capturing events from the underlying Postgres database to decide when to refetch queries."
"Listen/Notify: Requires instrumenting all tables with triggers, events consumed by consumer (the web-server) might be dropped in case of the consumer restarting or a network disruption."
"WAL: Reliable stream, but LR slots are expensive which makes horizontal scaling hard, and are often not available on managed database vendors. Heavy write loads can pollute the WAL and will need throttling at the application layer."
"After these experiments, we’ve currently fallen back to interval based polling to refetch queries. So instead of refetching when there is an appropriate event, we refetch the query based on a time interval. There were two major reasons for doing this:..."
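The interval-based polling approach those quotes describe can be sketched as a loop that refetches a query and notifies only when the result changes. This is a simplified illustration of the idea, not Hasura's implementation (which batches many subscribers into multiplexed queries); `poll_query` and its parameters are invented for the sketch:

```python
import time

def poll_query(fetch, on_change, interval=1.0, max_polls=None):
    """Refetch `fetch()` every `interval` seconds and invoke
    `on_change(result)` only when the result differs from the
    previous poll. `max_polls` bounds the loop (useful for tests);
    pass None to poll forever."""
    previous = object()  # sentinel that never equals a real result
    polls = 0
    while max_polls is None or polls < max_polls:
        result = fetch()
        if result != previous:
            on_change(result)  # push only on change, not every tick
            previous = result
        polls += 1
        if max_polls is None or polls < max_polls:
            time.sleep(interval)
```

The trade-off is visible in the sketch: no triggers or WAL slots to manage, at the cost of up to `interval` seconds of staleness and one query per tick per watched result set.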
Hasura is a great product and it deserves all the love it gets here. Tanmai and I exchanged a few messages in February, so we're aware of what each other are working on. I don't really feel this space is a winner-takes-all market (see also nhost, graphile, subzero, amplify - all great products).
The parent comment asked "how is it going to compare", so I listed a couple of promises. It's up to us now to deliver on those promises - we're only a small team and our startup is 5 months old, but we move fast. I don't plan to disappoint. Sorry this reply took a few hours; I wanted to make sure I read your link (and slept) before commenting.
We are still doing a heavy assessment of whether this model can be generalised for everyone. It covers the details of both authentication and authorization - we are just building a nice/easy way to enable this for everyone (probably using the same model as Postgraphile: https://www.graphile.org/postgraphile/security/)
edit: OK added - https://github.com/sponsors/supabase
Otherwise, great project!
The trickiest part was the realtime functionality - https://github.com/supabase/realtime
It's hard to answer because we're so early, so the most honest answer I can give is: i don't know.
Firebase just changed their pricing to "pay as you go". I need to dig into their pricing to give a good comparison, but I think we will be much cheaper on the IaaS aspect - we will try to offer more value via the dashboard (i.e. we want to build an Airtable-like interface for non-techies to use) and charge on that aspect.
One more thing - we personally find predictable pricing to be an important feature of IaaS, so we will probably price differently from Firebase's usage model.
But we will be releasing a much more robust version with docs soon!
I've been influenced by code generators like xo/xo and sqlboiler recently (so you can have type-safe APIs and you still manage/own the resulting code).
My bet is that you can generate an entire API _and_ basic CRUD browser UI from a db schema. I've been working on a code generator that does this. It can currently generate an entire Go REST API with CRUD operations from a PostgreSQL database. Next is to generate a React/TypeScript UI and add auth support.
The advantage a project like mine has over this is that it defines a standard API specification, and you can build APIs and UIs against that standard. Right now the only API is Go, but I'd like to build a Java one too and would be open to community submissions.
Same thing goes for database dialect. Right now it's PostgreSQL only, but this kind of code generation can be done on any database once a driver is added.
Won't link to it here because the whole thing is WIP but it's on Github. I see this kind of project as a long-term base for rapidly building any sort of db-based applications in the future.
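The generate-from-schema idea can be shown in miniature: introspect a table's columns and emit basic CRUD statements for it. A toy sketch using sqlite3 so it runs anywhere (the projects mentioned above - xo, sqlboiler - target Go and do far more, including type-safe structs); `generate_crud_sql` is a made-up name:

```python
import sqlite3

def generate_crud_sql(conn, table):
    """Introspect `table` via PRAGMA table_info and return basic CRUD
    statements derived from its columns. Assumes a trusted table name
    and an `id` primary key; a real generator would validate both."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    col_list = ", ".join(cols)
    placeholders = ", ".join("?" for _ in cols)
    set_clause = ", ".join(f"{c} = ?" for c in cols if c != "id")
    return {
        "create": f"INSERT INTO {table} ({col_list}) VALUES ({placeholders})",
        "read":   f"SELECT {col_list} FROM {table} WHERE id = ?",
        "update": f"UPDATE {table} SET {set_clause} WHERE id = ?",
        "delete": f"DELETE FROM {table} WHERE id = ?",
    }
```

From here, emitting handler code (Go, Java, or anything else) is string templating over the same introspected column list - which is essentially what the schema-driven generators do.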
That's what we're building! We already have the auto-APIs, and auto-documentation. The auto-UI is in the works. It essentially is Airtable, but backed by Postgres
Also, we will provide "meta" REST APIs for your database. Want to programmatically add columns or fetch all your database types? No probs.
I don't think this will be a problem for attracting startups (who will take up the new hotness) or for large enterprises (where you can package all the services into a black box that you support). And I get that you're building on some well-known existing software (PostgREST, primarily) but I'm still concerned about the operational overhead.
My preference (as an eng manager) would be to operate (and develop hooks for) a single generated server in Go, Java, or Python rather than manage a real BaaS.
Still, it's a cool project, and it's good to have it open source _and_ backed by YC. I passed it along to friends who are looking for this kind of thing. Best of luck.
Next up is authentication and a React UI.
Whoops! Forgot the link earlier: https://github.com/jimktrains/bam I wish I had included an example in there though :(
I'm a big fan of Firebase and use it whenever I can. The reason it's appealing is the suite of tools and how well they work together for bootstrapping (auth, Firestore, storage, analytics, etc). No single feature by itself is useful to me.
I would definitely switch if this at least had auth + db + file storage + functions.
However, I think there are an increasing number of reasons to have your data stored in a SQL database you can access directly. The open source tooling being built around SQL (usually Postgres) as a standard is getting better and better, and it's going to be hard for Firebase to compete with all those offerings. If I run a Postgres database I instantly get tools like Hasura and Metabase, among others, that add a ton of value out of the box. However, maybe those tools will also integrate with Firebase.
Anyway, my point is that this is best of both worlds, so great to see!
Also, it seems like it's missing everything else that Firebase provides. Authentication, authorization, storage, hosting, etc. To me, Firebase's value prop is more than just a database.
I keep meaning to give it a go self-hosted, but I have this horrible feeling it needs MongoDB, but I should check that again.
I tried it when Buddy hosted it for free and it's quite nice. I prefer it to Firebase because the API felt simpler and more discoverable. I think the fact that no company is "pushing it along" means it stays simple.
I’m currently building an MVP for a large client and I’m thinking of replacing Firebase for a custom back-end if the project grows. I think with the current size it would be easy but I want to avoid passing the point where it becomes hard to undo.
Firebase is really only bad if/when you decide to move - usually because of scaling/performance issues. For example, you can only query one document per-second. Once you decide to migrate away, it's very painful - but the truth is all migrations are painful.
This is one of the reasons we chose postgres. If you want to migrate away, you can just "take your database" with you. PG can scale with the best of them.
Edit: I said "you can only query one document per-second" but this is supposed to be "you can only query each document once per-second". Sorry!
That's not true. You can't query any single document more than once per second, which is very different. You can certainly query many separate documents per second.
If you are comfortable writing GraphQL, you can add your own custom GraphQL server to Hasura for business logic.
If you are comfortable with REST APIs (or something that already exists), you can use Hasura Actions to define GraphQL types and call your REST endpoint to perform business logic.
Now where this server is hosted is totally up to the user. It can be serverless functions or a monolith server (in the language of your choice) hosted anywhere.
Hasura just needs the endpoint :)
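An Actions handler is just an HTTP endpoint that receives Hasura's JSON payload and returns data matching the action's declared output type. A minimal sketch of the handler logic as a plain function (the payload keys `input` and `session_variables` follow Hasura's Actions format, but check the docs for your version; `handle_place_order` and the order fields are hypothetical):

```python
def handle_place_order(payload: dict) -> dict:
    """Handle a hypothetical `place_order` action. Hasura sends the
    action's arguments under `input` and the caller's session under
    `session_variables`; we return a dict matching the action's
    declared output type."""
    user_id = payload["session_variables"]["x-hasura-user-id"]
    items = payload["input"]["items"]
    total = sum(item["price"] * item["qty"] for item in items)
    # A real handler would write the order to the database inside
    # a transaction before responding.
    return {"user_id": user_id, "total": total, "status": "placed"}
```

Wrap this in any web framework (Flask, FastAPI, a serverless function) and give Hasura the URL - that's the entirety of the integration contract.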
> If you are comfortable with REST APIs (or something that already exists)
This is an order of magnitude worse than simply having to "depend on serverless functions". You're saying that for any real-world business logic (read: non-CRUD writes to the database; every app has these), you need to re-implement another GraphQL server? Or simply have something that "already exists" to solve your problem? Isn't that what Hasura is for?
> Now where this server is hosted is totally upto the user.
I think the dream of Hasura-like products is to not have a bunch of servers everywhere that need to be managed and coordinated. This is the beauty and power of PostgreSQL. Containing our business logic in it is ideal. At the very least, containing our business logic in one Hasura instance would be second best. Calling out to some other REST API or custom-implemented GraphQL server defeats the purpose of a self-contained GraphQL layer. If some of the data lives elsewhere, sure. But what if the data just lives in our database?
We already support user defined PG functions to be exposed as graphql queries so this is a natural extension.
How are you surfacing RPCs from the database? That sounds interesting.
This comes at the cost of the complexity of implementing a performant GraphQL server. Perhaps another instance of a thing that gets really popular because it's good for large corporations rather than being ideal for solo devs or small projects - like NoSQL and other scalability optimizations vs plain SQL. Hasura seems the best I've come across so far at making this easy/quick.
But in many situations, at least for my use case and likely many others, simply writing well-thought-out SQL queries is simple, easily iterated upon, and requires fewer layers, less infrastructure, less code, etc. In the Hasura scenario, custom SQL queries inevitably become necessary anyway; Hasura mostly gets you the CRUD operations quickly.
If you construct the "screens" or "pages" of a Next.js web app, for example, as a collection of RPCs defined in SQL, either requested through "/api" routes integrated with the framework or (even better, when possible) run directly against the database [using raw SQL of course] inside "getServerSideProps" and "getStaticProps", there isn't really anything simpler in my view. No ORM, no GraphQL.
By RPC, I simply mean a stored SQL function or a view. Executing one is as simple as (node-postgres style):
// some API endpoint, which then queries the database:
const { rows } = await db.query('select pay_balance($1)', [leaseId]);
Reading "The Art of PostgreSQL" at the moment, and a constant refrain is to push as much business logic into the database as possible. When you do this, you avoid so much complexity versus having it spread across application code, cloud functions, microservices, etc. ACID, transactions - it all comes with it.
https://twitter.com/adamwathan/status/1246144545361997829 This uses a query builder; I wouldn't even do that. Just raw SQL everywhere.
The tagline: "Turn thousands of lines of code into simple queries" couldn't be more impactful in my thinking of late, and it's been extremely beneficial for making me more productive with simpler, less error prone code.
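A small, runnable illustration of the "push logic into the database" idea, using sqlite3 purely so the sketch is self-contained (in Postgres you'd reach for a function like the `pay_balance` example above, or a view exactly like this one); the `leases`/`payments` schema is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE leases   (id INTEGER PRIMARY KEY, rent INTEGER);
    CREATE TABLE payments (lease_id INTEGER, amount INTEGER);
    -- The business logic (what does a lease owe?) lives in the
    -- database as a view, not in application code:
    CREATE VIEW lease_balances AS
        SELECT l.id AS lease_id,
               l.rent - COALESCE(SUM(p.amount), 0) AS balance
        FROM leases l
        LEFT JOIN payments p ON p.lease_id = l.id
        GROUP BY l.id;
""")
conn.execute("INSERT INTO leases VALUES (1, 1000)")
conn.execute("INSERT INTO payments VALUES (1, 400)")

# Application code shrinks to a one-line query against the view:
balance, = conn.execute(
    "SELECT balance FROM lease_balances WHERE lease_id = 1").fetchone()
```

Every caller - API route, report, ad-hoc console query - now gets the same balance logic for free, which is the "thousands of lines into simple queries" effect in miniature.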
That's false. You can write your own service or use some open source authentication system, like you would in a normal website. Or use JWT tokens. It doesn't have to be a 3rd-party service.
Effectively a firebase alternative using open source tech
For business logic you can trigger webhooks (Events) to any service or stack of your choice, or use GraphQL schema stitching. And now there's the new Actions feature as well.
But since they got acquired by MongoDB things have gone a bit quiet, so who knows?