Why I’m dumping Firebase for Web (lugassy.net)
246 points by mluggy 125 days ago | 118 comments



Firebase taught me, for a B2B2C kind of app:

1) Relational databases are more flexible: you model your data according to the actual "relations" between them, instead of how you want to query them. This makes it easier to add new views/queries to your application.

2) Joins are necessary

3) In a very short time you need complex authentication logic, and it is very hard to do without looking at the requested (and related) data directly. The best way to do this is a good old RDBMS + server app

4) Moving more logic from servers to clients forces you to be more careful about client versions and duplicated logic across different clients (iOS, Android, Web, Dashboard)
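To illustrate point 1 with a toy example: in a denormalized store you often duplicate data per view, so one logical change becomes a multi-path "fan-out" write, whereas a relational schema would keep the value in one row and join. A minimal plain-JavaScript sketch (all paths and field names are invented):

```javascript
// Denormalized layout: the same user name lives under several
// view-specific paths, so renaming a user means a multi-path update.
const db = {
  "users/u1/name": "Alice",
  "chatrooms/room1/messages/m1/authorName": "Alice",
  "userFeeds/u2/m1/authorName": "Alice",
};

// Firebase-style multi-path update: apply every key/value at once.
function multiPathUpdate(store, updates) {
  return { ...store, ...updates };
}

// Renaming Alice requires touching every copy; with a relational schema
// (messages JOIN users) the name would live in exactly one place.
const renamed = multiPathUpdate(db, {
  "users/u1/name": "Alicia",
  "chatrooms/room1/messages/m1/authorName": "Alicia",
  "userFeeds/u2/m1/authorName": "Alicia",
});
```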


> 2) Joins are necessary

> 4) Moving more logic from servers to clients forces you to be more careful [...]

Back to the very old client-server-database architecture, instead of the client-database model that was proven a bad idea a long time ago. The client-server-database architecture was happily ditched by the crowd that builds dynamic websites and calls them "web applications", so now everybody is rediscovering that it actually is a poor idea for the client to talk to the database directly.


I agree with this, to a point. But what's missing in this discussion are the things that Firebase (and others) brought to the table that were new: real-time push, and an adminless backend (the "serverless" term that they and others try to popularize fails to describe what it actually is).

I'm personally thankful the nosql hype has finally subsided, and everyone recognizes it for what it is, a tool in the box. Not everyone has made that realization yet, as other posts in this thread indicate.

Server pushing data enables some really useful things, the archetypical example being online chat. Adminless backends are also very attractive to a lot of people, in particular front end devs who just want to build, not do a poor (insecure) job admining servers. I personally lament this "everyone's a server admin" culture we have come into in recent years.

But those innovations aren't enough to make up for the deficits the author, and many here are describing. Like others have said, use Firebase for what it's intended, and when you really need to scale, be prepared to roll your own solution.


http://phoenixframework.org/ actually has a built-in websocket layer called Channels (it's the inspiration for Action Cable and Django Channels). Unlike Deepstream or SocketCluster, it has a long-polling fallback as well as multi-node clustering.

It's a shame more people don't realize what a gold mine of a piece of technology it is.


Hi, I'm the main author of SocketCluster. SC does support multi-node clustering; in fact, you can deploy with a single command and scale with a single command. SC has been able to auto-scale on Kubernetes for almost one year now. See https://github.com/SocketCluster/socketcluster/blob/master/s...

You're right that neither SC nor Deepstream supports long-polling fallbacks, but there are some very good architectural reasons for this. Both SC and Deepstream used to support HTTP long-polling fallbacks via engine.io in the past, but both projects independently decided to stop doing so after years of experimentation and feedback.

Now that WebSockets are well supported in all major web browsers and HTTPS has become even more prevalent (so corporate proxies are no longer a problem), the extra load balancing complexity, performance costs and the DoS vulnerabilities of long polling are no longer worth it.


On rereading my comment, I realize I was ambiguous. I meant that Phoenix Channels is the only multi-node clustering solution for websockets that has long-polling support.

I can definitely appreciate the additional complexity of supporting the fallback. I imagine that elixir's OTP and STM functionality lowers the bar for handling the extra state management across multiple nodes.


Corporate proxies are still an issue. Many now do something like SSL inspection: essentially, the proxy can see everything even if it is HTTPS. And websockets do not work with these proxies, even over HTTPS.


These do exist, but they are not common; it's an insignificant proportion of users. In a representative sample of all web users, you're probably more likely to encounter users with JavaScript disabled in their browser altogether than users who are being snooped on and restricted by their company in this way.

Also, for these proxies to work, the company has to have access to the user's machine to install its own root CA certificates onto it; so generally, this issue is limited to corporate workstations and not BYO mobile devices or personal devices.

It's only a big problem if you want to support users from a specific company which happens to be a major customer (like a corporate SaaS solution); but if that particular company is such a big user of your product then they can always change their proxy policy to allow WebSockets from your domain.

I think that there are few enough of these companies that they should be the ones to adapt to new technology and not the other way around. It's important for open source projects and companies to set positive standards and not always bend to the will of corporations; especially when it comes to ethically-questionable practices.

You can still offer a REST API without real-time features for those users. The cost of long-polling is that bad. It's very easy to DDoS.


Phoenix Channels are great, but they're totally not a replacement for Firebase. You need to do a lot of coding on top of Channels to get what Firebase offers you.

Not that that takes anything away from your argument, but given that this topic is about Firebase, readers could get the impression that it offers the same.

That said, I like Phoenix Channels too, and the long polling support is a must-have for us. The "very good architectural reasons" for not doing this, as quoted by @jondubois in a sister comment, are nice, but if that means that 10% of our audience can't use our product then that's a show stopper. I love that Phoenix takes these cases seriously.

That said, Phoenix Channels are a mess when it comes to documentation and terminology. It's built on only two terms, "socket" and "channel", both of which mean 2 to 3 different but related things.

(background: at https://talkjs.com we recently moved our realtime stuff from Firebase to Phoenix Channels)


Dokku implementation for any Linux server hosted locally or in the cloud. Not affiliated just really like the product.

https://ashleyconnor.co.uk/2016/03/06/deploying-a-phoenix-ap...


I'm one of the developers on Cloud Functions for Firebase and would be happy to answer any questions you have. I'll try to keep my responses to my product alone since I think other team members are more qualified to answer those.

Your feedback is both fair and something we're diligently working to fix. In response to each of these points:

1. [Local testing only works for HTTPS]. For better or worse, our product is built in layers. There's the core Cloud Functions product and the "for Firebase" add-ons. You can already see some public changes we've made to the core Cloud Functions emulator, including some support for event-handling Cloud Functions as well as improvements in debugger support. We definitely want better "in house" support and are in the user-testing stages of some generational leaps in local testability with the Firebase toolchain. You can reach me at my handle at google.com if you'd like to be considered for some of the user testing.

2. [Debugging feels like a murder mystery]. Yes, this is another thing we are working on. Cloud Functions integrates with Stackdriver Debugging[1]. There are very real issues with the integration today. TL;DR: If you have a steady stream of requests, it will work. If not, the ephemeral instances won't be alive/unthrottled long enough to fetch watchpoints. This is a huge concern to many of us within Google and we're working hard to improve the way it works.

3. [No cron jobs]. Again, a huge feature request and a high priority. You don't need to use your own machine; you can use GAE's free tier to kick off cron jobs for you[2]. Still, a more tightly integrated solution is obviously ideal and is coming.

Closing: We take these issues very seriously. I'm sorry we don't have the solutions to your problems today, though all three line up with active development on the team. We are focused on delivering the infrastructure required for mature apps with high-volume and mission-critical traffic. Keep an eye out for our product as it crosses into general availability and beyond.

[1]: https://cloud.google.com/debugger [2]: https://github.com/firebase/functions-cron


When marketing interacts with engineering, it never ends well. Any serious engineering tech homepage should sincerely advertise the limits and tradeoffs of its tech. That's how engineers work, and it's something salespeople will never understand. We trust technologies and people because they explain their limits, not in spite of it.

There's no free lunch. Whenever you pick a technology, understand what the downsides are. If you don't see any, then you probably don't know enough. NoSQL is cool, until you start needing transactions or database-level data integrity checks (hint: you always end up needing them at some point). Cloud hosting is cool, until you realize it's way more expensive for your needs and that network traffic is going to cost you a fortune. Serverless is great, but then what are you going to do to ensure business rules are exactly the same for all your clients, at all times, etc. etc.


I.. walked the same road, but 2 years ago :(

Here's an alternative I found: https://deepstreamhub.com/open-source/

It works really well. You will like it. Full control + an easy interface. Got me started really quickly.


Has anyone here used gun extensively? https://github.com/amark/gun It seems to position itself as a competitor to Firebase, and it wouldn't have many of the issues mentioned in the OP.


Wow, this is the exact thing I've been thinking of writing, but so much better. I'm so glad this exists, brb trying it out right now


Just noticed this today, sorry for the delay. I'm the author, anything I can do to help out? Let me know!


Hey -- nice job! I have a question: is there a way to use MongoDB in the backend?


Yes! Somebody in the community just recently built this:

https://github.com/sjones6/gun-mongo

https://github.com/sjones6/gun-mongo-key

Anything else?


Looks interesting, anyone has experience with it?


Well, the code itself doesn't look interesting at all; it's ugly as hell. It lacks comments, is poorly organized, and is full of ugly names.

e.g.: from chain.js:

// TODO: BUG! Handle plural chains by iterating over them.

if(obj_has(next, 'put')){ // potentially incorrect?

if(u !== next.put){ // potentially incorrect? Maybe?

A lot of comments like: // ugly hack for now.


Lol. I opened a random src-file and started to wonder wtf this is.

Source: https://github.com/amark/gun/blob/master/src/state.js


I can't believe so many js devs have regressed to single-letter variable names. This has been a code smell for generations :(


I stopped using firebase DB about a year back. Now I prefer:

1) API backend using Google Cloud Endpoints deployed in GKE. Still using firebase Auth.

2) React SPA served from firebase hosting. Using FirebaseAuthUI

One thing I continue to be astonished about is how difficult it remains to build a real app (SSL, auth, persistence, packaging, deployment) as a one-person team.


Keep your stack simpler with an integrated framework instead of going for the latest fancy cloud-everything tech, and it's a lot easier to build/deploy as a one-person team.

What is wrong with a simple Rails/Spring/Django project hosted on Heroku/Elastic Beanstalk? SSL, auth, persistence, packaging and deployment are all covered and easy.

If you want a bit more control, you can swap Heroku/EB for a VPS and use Ansible for deployment, which is my preferred method for one-person projects.


In my spare time I use asp.net with knockout.js and typescript. Hosting on Azure. With that setup, I can deploy from my editor which is perfect for a one man team.


I run a totally different stack, but by my count there are at least 8 technologies that I use in almost every web app (HTML, CSS, JavaScript, a backend language, a front- and/or backend framework, a relational database, NGINX, Linux). And that does not include domain-specific technologies, or general skills like image editing, design, application architecture, etc. needed to create a functional app.

Edit: 9 since you specified ssl too (thank you Let's Encrypt), and 10 for real-time support, and 11-20 for all the devops stuff that I should know more about.


I think you chose flexibility over simplicity. Simplicity on Google infrastructure would be to either:

- completely use Firebase

- completely use App Engine

That way, you would get easy packaging, SSL, deployment, infrastructure maintenance...


I used Firebase a while ago; it was quite pleasant to use initially.

I think that all these SaaS/BaaS services do speed up development initially but they slow things down in the long run. I had the same issue with Amazon Elastic Transcoder (for videos) and SNS; once you need to split out different environments (development, staging, production) and then scale to support more regions and start splitting up jobs into more streams; it can get really difficult to the point that you wish you were running your own service instead.


Some years ago, I was on a team that needed to remove a Firebase dependency.

We were using Firebase for simple read-only queries and our dataset wasn't changing much, so statically served nested JSON files/folders were a great substitute.

In case it helps anyone else out, here's the script that automated the creation of those files from a Firebase data dump: https://github.com/zackbrown/firebasic


There is a way to cache data in memory... I don't know why this isn't documented:

https://stackoverflow.com/questions/38423277/does-firebase-c...


Heh, I asked that question.

I implemented the solution in a Firebase wrapper and it works great.

Anyway, Google is solving this in JS by implementing persistence.

https://github.com/firebase/firebase-js-sdk/issues/17


Firebase documentation is very lacking and their support is slow, but it is an amazing product. The problem is when people try to use it for anything and everything.

Firebase is not meant to be a backend for just any web app. It is a specialized tool for specific problems. If you use it without evaluating whether it is a good fit for your problem, then you'll certainly run into a wall.


(deepstream employee here) Despite its flaws, Firebase has some great ideas, e.g. its permission language, which strongly inspired us when building https://deepstreamhub.com/. But there are some aspects, many mentioned by the OP, that sent us down a different route:

- Serverless is a great goal, but only works for low complexity apps. We've designed deepstream explicitly not to take the server away, but as a layer that sits in between backend and frontend and allows servers and clients to connect and exchange data.

- The same is true for auth. DeepstreamHub comes with similar built-in auth and user management functions as Firebase, but more importantly comes with a webhook auth strategy that forwards any login data and associated connection info (IP, cookies etc.) to an HTTPS endpoint of the user's choosing. Depending on the returned status code, the connection is either granted or denied. In addition, the auth server can also return client-specific metadata that is either forwarded to the client upon login or used within permission rules to determine access rights.

- Querying. This has so far not been a strong point of deepstream either, but we'll soon be releasing a blazingly fast realtime GraphQL implementation to address this. More details here: https://deepstreamhub.com/blog/deepstream-3.0-release/


I had never heard of deepstream. But now I hate it for spamming the product camouflaged as opinion, both on the Medium article and here.


I tried to move away from Firebase to Deepstream a little while ago and sadly, Deepstream lacks the polish that Firebase has. It seems that Firebase is definitely the best solution out there if you truly want to avoid configuring your own servers, scaling and worrying about infrastructure related aspects. I think as long as developers inform themselves of the limitations of Firebase, it's a great product.


As far as scalability goes, the open-source solution doesn't seem too promising. Being capped at 10,000 connections seems like a very low limit to me, especially considering that the blog post complained about 100,000 connections not being enough.


As a Firebase user, what would be the easiest path to map my multi-nested Firebase DB structure (like `chatrooms/room1/msgs/msg1`) to deepstream?

Since I think DS doesn't support more than one level of nested data, is there an easy way?


deepstream has a concept of records, similar to individual documents within a document-oriented database. Each record can contain any arbitrarily nested data structure.


Yes, but can I subscribe to a nested data structure (say room1/messages) like in Firebase, rather than the whole record (room1)?

Also, if I want to maintain a list of records (say, a list of chatrooms), I see that I will have to use "Lists". But I think I read somewhere that the size of that list cannot exceed some limit. Is that true?

Thanks!


I'm surprised that nobody has mentioned yet how slow Firebase can be. Updates via the REST API especially can take upwards of 1 second. And that is for a really small database of a couple of MB at most (with the updated value itself being a tiny string).

I really don't know what they are doing, it seems they have made all the wrong decisions engineering-wise. I sincerely hope some team at Google is really busy right now rewriting everything.


I'm fine with Firebase in general, but feel like it's overhyped. It's nice to get little cloud functions, onWrite database hooks, and somewhat-realtime data, but if a competitor rose up and took a stab at the frequently noted pain points, they could probably grab market share pretty easily.

My pain points:

Cloud functions debugger = -1.

Can't tell you how much fretting I have dealt with using async.each and some buffer overflow or some other low-level Node.js error that never makes it to the debugger. And the debugger isn't a plaintext log. It's a tacky UI where each line is an element, the "logs" are 600px high, and whenever you scroll a little you lose your spot on longer JSON logs.

I also strongly, strongly dislike how easy it is to accidentally delete data from storage. That red X icon is way too easy to accidentally click, and daily backups alone don't give you wiggle room to recover on heavily used apps. This makes it sadly difficult to use their database view to quickly show things to others without building a UX for it.

Let's not talk about how setting a field to `new Date()` just loses the value entirely instead of throwing an error or defaulting to ServerTime.


Nothing beats Django and the Django Rest Framework. In my opinion, it's easier to code a RESTful API in Django than to configure a 3rd party service.


Try Falcon.


Looks interesting. What back end do you use?


A great blog post.

I feel that Firebase has a lot of services - and one should seriously consider what you are getting for the cost too.

Firebase Authentication - Basically free. You can pay for something like Auth0 and use the Firebase Admin SDK to add Firebase Users on authentication through Auth0 then let that service be your source of truth. Alternatively, you can spin up your own auth service on a firebase function using Passport.js and, again, roll your own solution.

Firebase Hosting - Pretty solid. Nothing crazy great and nothing crazy bad. Easy to deploy and host static files to the edges.

Firebase Database - What Firebase needs is a proper NoSQL database, but you can get that in Google Cloud Platform. As a real-time layer for clients, it is pretty good. Not so great if your corporation is on AWS. But there are options (like deepstream.io) that might require a bit more setup work. How long until every cloud provider has a real-time data layer offering, though? Firebase won't hold this space forever.

Firebase Functions - Now, there are a number of comparisons around the functions vs AWS, Azure, etc.. Functions are what they are and they all perform better/worse in various ways. Deploying Firebase Functions is WAY easier than spinning up Lambdas. That said, The Serverless framework makes that much easier too... Serverless actually supports many cloud providers.

If you want to have a real-time app, Firebase, Deepstream, SocketCluster, etc... None of those should probably be your "core" database anyway. They are all great data-sync solutions.

I think that people using Firebase should be aware of growth paths and if they think they will need a feature, build for it. Abstract your code so you can move away from Firebase or extend it with other services. Treat the Firebase Database as the handy real-time layer, but consider storing your data in a proper database elsewhere.

Remember, you can do a complex query in another database and then use the ID results to point the client to Firebase records to sync to (for instance). This, of course, applies to other solutions too. Heck, Deepstream provides RPC calls so you can make those queries then subscribe to your results (if you need real-time results).
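The "query elsewhere, then sync by ID" idea above can be sketched like this (plain JavaScript, with stubs standing in for the external database and the Firebase subscriptions; every name here is hypothetical):

```javascript
// Pretend result of a complex query run in a "proper" database:
// it returns only entity IDs, ordered by some expensive criterion.
function complexQuery() {
  return ["post42", "post7", "post99"];
}

// Firebase holds the actual records, which clients sync in real time.
const firebaseRecords = {
  post7:  { title: "Hello", likes: 12 },
  post42: { title: "World", likes: 99 },
  post99: { title: "Again", likes: 3 },
};

// Client: take the IDs from the query result, then "subscribe" to each
// corresponding Firebase record to render the view.
function syncByIds(ids, records) {
  return ids.map((id) => ({ id, ...records[id] }));
}

const view = syncByIds(complexQuery(), firebaseRecords);
```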


I loathe Google's various scaling limits.

"You need to ask a Google person to scale further, but hey we're Google and we do everything we possibly can to avoid human contact." Bleh.


I had to increase our quota of layer 7 load balancer backends recently on GCP, submitted the request and the quota was increased within minutes. YMMV though, I guess?


The issue is that Google's reputation is to avoid all contact. This gives me less than zero faith in their commitment to support.


For Firebase it's actually not awful. We have 3 Firebase projects each with their caps raised to 100k concurrent connections, and it was a decent experience. Always the same person handling my support case etcetera, not too bad.


Really love Firebase for prototypes, and I really agree with many of these points that it is not for use in production work.


I've been using Firebase (for amy.ac, a math tutoring platform) for about 10 months now and I'm quite happy with it (now). After about ~3 months I got pretty tired of it, since I came from a SQL background and had no way of doing fancy queries. At that stage I almost switched to Deepstream (sorry Deepstream, but none of your examples worked, so no Deepstream for me; also I'd need to do my own load balancing, etc.).

Why I still use Firebase comes down to 3 major reasons: 1) Cloud Functions, 2) using the Firebase DB as my model and never querying anything from Firebase, 3) I really don't want to touch Docker/AWS/load balancing/etc.

In my case, the client simply listens/writes to a particular model (DB path). A cloud function wakes up, manipulates the model, and falls asleep.

That pretty much means every view-model a user can see is pre-calculated and stored in the DB.
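That wake-up-and-precompute step can be sketched as a pure function; in a real deployment it would run inside a Cloud Functions database trigger, and all the names below are made up for illustration:

```javascript
// Hypothetical raw model, written directly by the client.
const rawModel = {
  answers: [
    { questionId: "q1", correct: true },
    { questionId: "q2", correct: false },
    { questionId: "q3", correct: true },
  ],
};

// The "cloud function" body: wake up, recompute the view-model the user
// will see, store it back in the DB, fall asleep. The client only ever
// listens to the pre-calculated result.
function buildViewModel(model) {
  const total = model.answers.length;
  const correct = model.answers.filter((a) => a.correct).length;
  return { total, correct, score: Math.round((correct / total) * 100) };
}

const viewModel = buildViewModel(rawModel);
```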


I love Firebase, I think it's a great tool. I actually gave a talk about Firebase when I was an intern at Google in 2015. I've liked it for a while. I think it's a simple model that actually lets you build most things very quickly and in a fun way (since it's all real-time).

Here's my response to some of your points:

> Only easy if you implement FirebaseAuthUI, which has a UI that is out of context and intolerable if you care about UX. Phone verification for example (albeit free, requires ugly ReCaptcha) and the AccountChooser (albeit great concept) opens in a completely Google design.

To me this is a completely acceptable security requirement. IIRC the account chooser has other related elements, including accepting permissions, and TBH I think there's good reason for a standard UX for auth stuff. It is a lot easier to just tell people, "Only trust things on google.ca, and the UI should look like Google too", etc.

> Social plugins change all the time and to use the most up to date (like this awesome “Continue as {{Name}}” button) you have to implement the providers’ own JavaScript and other necessities and do a lot of work to make them work together.

You have to do this on the web anyway. Firebase doesn't make it harder.

> You can’t easily add claims (groups, roles, feature toggles, etc.) to user’s json web token, meaning you have to create and supplement each authentication with a call to the Real-time database.

I actually like this feature request. I see it more like something that Firebase could add, but if they are trying to make it fit in with their db, it is kind of an unprecedented feature afaik, so I don't really expect it.

> Querying on anything beside a simple key lookup is a pain in the bum. Somewhere, someone understands how indexing, pagination and multi-filtering works but I’ve given up. Don’t forget you do all of this client-side (JavaScript).

Indexing happens on the server side. It's actually extremely powerful. You just have to start with the question: What data do I want to show to the user when it's done? You make the entity ids of that data the value of an object in FB. Then you set up a listener that

1. Listens to all other relevant parts of the tree in real-time

2. Does some calculation on the values it retrieves

3. Saves a new key [ computed_value : entity_id ]

Then, you simply query for the top x keys, getting your entity IDs, and in your UI you just instantiate your component to load the data for each entity in its own real-time query. It is fast enough to build nice UIs; that's why people win hackathons with it.
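A plain-JavaScript sketch of that fan-out indexing pattern (plain objects stand in for Firebase paths; all names are hypothetical):

```javascript
// Hypothetical source data: entities keyed by ID.
const posts = {
  a1: { title: "First",  votes: 4 },
  b2: { title: "Second", votes: 9 },
  c3: { title: "Third",  votes: 7 },
};

// "Listener" step: on every change, write a computed index node whose
// keys sort the way we want to display the data (zero-padded votes, so
// lexicographic key ordering matches numeric order).
function buildIndex(entities) {
  const index = {};
  for (const [id, post] of Object.entries(entities)) {
    const key = String(post.votes).padStart(10, "0") + "_" + id;
    index[key] = id;
  }
  return index;
}

// Query step: take the top-N keys, getting back entity IDs; each ID is
// then loaded by its own component in the UI.
function topIds(index, n) {
  return Object.keys(index).sort().reverse().slice(0, n)
    .map((k) => index[k]);
}

const ids = topIds(buildIndex(posts), 2); // ["b2", "c3"]
```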

> Firebase returns data as “snapshot”, a very weird, encoded structure that is not easily iterative. You have to navigate between forEach, snapshot.key and snapshot.val() to get what you want. I thought that’s the whole point of using JSON.

Snapshots are for things like user profiles where you actually just want to get the object at the given path (i.e. want to retrieve {"name":"mluggy7"} from /users/1).

If you want to iterate things, you should be using the `on` query: https://firebase.google.com/docs/reference/js/firebase.datab...

The way it works is that it treats an object like an array of data, where the order of the array is the order of the keys in the object. It calls your callback when data is added to the array, moved around in the array, deleted from the array, or changed (but kept in the same position). You can use these callbacks to manage some internal array state where you're holding components that are initialized with the data items.
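As a rough illustration of that callback-driven array state (the event shapes here are simplified stand-ins, not the actual SDK snapshot objects):

```javascript
// Minimal simulation of maintaining local list state from Firebase-style
// child events (child_added / child_changed / child_removed).
function applyChildEvent(list, event) {
  switch (event.type) {
    case "child_added":
      return [...list, { key: event.key, val: event.val }];
    case "child_changed":
      return list.map((item) =>
        item.key === event.key ? { ...item, val: event.val } : item);
    case "child_removed":
      return list.filter((item) => item.key !== event.key);
    default:
      return list;
  }
}

let messages = [];
messages = applyChildEvent(messages, { type: "child_added", key: "m1", val: "hi" });
messages = applyChildEvent(messages, { type: "child_added", key: "m2", val: "yo" });
messages = applyChildEvent(messages, { type: "child_changed", key: "m1", val: "hi!" });
messages = applyChildEvent(messages, { type: "child_removed", key: "m2" });
// messages is now [{ key: "m1", val: "hi!" }]
```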

> Write errors are a pain to debug, even with debugging enabled ends up as JavaScript “permission denied” with no way knowing. You have to run your payload against the simulator to get the firebase rule line you had tripped on.

You only have to learn this once :p

> There’s no easy way to cache data, especially fragments that requires high computation. Why would a call to the same unchanged node by the same user not return from cache?

Not sure what you mean by computation, and I'm not sure why you'd need caching, since the protocol just sends diffs of what has changed. The real-time DB is the cache. You do your computations and save them in the real-time DB. Using indices, you can make searching an object O(1) in both keys and values; JS doesn't even really have that data structure.

> REST interface will not use the signed-in user’s credential. You have to pass it as a token (which is just weird, especially if you planned to build a simple CRUD on top of firebase serverless REST paths)

You have to have a token; the Firebase API has a token too, but it manages it internally. If you use a Firebase REST SDK (of which there are many, for many languages), it will probably manage the token for you and just give you API methods to call for REST functions.

> Firebase-admin is ignoring your database validation rules, who’s idea is that I have no clue but it caused me much grief.

Not sure what you mean here. You might be talking about an actual issue.

Cloud functions seems cool but I haven't used it so I can't comment.

(I've talked about Firebase on HN previously, for example [1]. I started using it at Hackathons in 2013-2015)

[1]: https://news.ycombinator.com/item?id=12117522


1. I agree Firebase is great for hackathons.

2. Iterating multiple records is a pain, and the indexing Firebase provides is not very helpful.

3. By caching computation I mean getting raw data from the database processed and digested (i.e. converting a costly top 10 into an HTML fragment you can use later).

4. Storing records with firebase-admin is like being a super-user: it ignores all security AND validation rules.

5. Cloud Functions are not cool.


I have clients with apps that are pushing the boundaries of Firebase. I have strongly suggested the move to Azure or Amazon "serverless" - Azure Functions or AWS Lambda. I get strong pushback due to "cost and complexity" concerns. I respond that of course "real" development and operations is going to be more complex and expensive than the "toy" proof of concept they built with Firebase. But this has been an issue in my career for decades - users get hooked on RAD (Rapid Application Development) tools and are then shocked that adding "10% more features" will add 1000% to the cost.


There is this pervasive belief in software that "everything should be easy!" which has never made any sense to me. For RAD, sure. But for anything that grows beyond that? A few core reasons why it can't (and won't) be, at least until AI is running the world:

* Any non-trivial application can explode in complexity quite quickly, even with well-designed data boundaries and code flows

* The human brain is incapable of managing said complexity in any effective way

* We must therefore break this complexity into consumable pieces that fit inside our heads

* Which means redesigning those data boundaries and data flows, in order to make them fit in our heads, which inevitably moves the complexity around the system

RAD tools HIDE all that complexity but the moment you need to move it, you can't. So you end up having to re-implement.

I am using Firebase for an app I do not expect to ever exceed a few thousand users - it is an app for a local business that is quite popular, but limited in geographical scope. So it's a good choice.

I wouldn't pick it for larger apps whose potential growth is not so limited. I'd take the hit up front to properly build an architecture that can scale (but doesn't, at first, per Martin Fowler).


I never actually deployed this, so I don't know how it would have worked with lots of users, but when I was writing a game that required "secret sauce" my plan was to have a "server" connect as a privileged user: clients would submit their proposed moves to a write-only queue that was readable only by the "server", which would do a rules check and update the game state as necessary.
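A sketch of what the Realtime Database rules for such a write-only move queue might look like (paths and structure are hypothetical; note that a privileged/admin connection bypasses these rules entirely, which is what lets the "server" read the queue):

```json
{
  "rules": {
    "moveQueue": {
      "$moveId": {
        ".read": "false",
        ".write": "auth != null && !data.exists()"
      }
    },
    "gameState": {
      ".read": "auth != null",
      ".write": "false"
    }
  }
}
```

Here clients can create a move exactly once (`!data.exists()` prevents overwrites) but can never read the queue back, and only the rules-exempt privileged connection writes the authoritative game state.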


Yes: since you don't trust your users, you have to put sensitive processes (like promotions, unlocks, rewards) in a separate server process or cloud function. This kinda defeats the purpose of serverless.


Most of the database concerns have been addressed in other blog posts etc. If you need relational data, don't use firebase. Period. Where you need "anything beside a simple key lookup" for the majority of your queries, go elsewhere. That's the first rule of Firebase.

The second rule of Firebase is structure your data and populate it in a way that makes it simple to fetch with a simple key lookup.
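A toy example of that second rule (hypothetical paths; plain objects standing in for the Firebase tree):

```javascript
// Data structured so that every screen is one key lookup:
// no joins, no filters -- the "query" is just the path.
const db = {
  // Screen 1: the rooms a user belongs to, duplicated per user.
  "userRooms/u1": { room1: true, room9: true },
  // Screen 2: messages of one room, already in display order by key.
  "roomMessages/room1": {
    "-K1": { from: "u1", text: "hi" },
    "-K2": { from: "u2", text: "hey" },
  },
};

// Fetching a view is a single lookup, not a query.
const get = (store, path) => store[path];

const myRooms = Object.keys(get(db, "userRooms/u1")); // ["room1", "room9"]
const firstMsg = get(db, "roomMessages/room1")["-K1"];
```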


Not one mention of Horizon by RethinkDB? :D


What are you moving to as an alternative?


I'm moving to a combination of 2 things: [1] Cosmos DB for data storage, and [2] Service Fabric for the things that require real-time collaborative logic. An example of what I mean by #2 is things like real-time discussions where content pops in dynamically.

The big problem for me using a serverless architecture was constantly having to bend over backwards with the security model of firebase to accomplish things that are very simple when you have a layer of business logic under your own control on the server between the client and the database. Once the client can write whatever they want to the database, every single operation becomes an order of magnitude more difficult to develop as you imagine what ways the client could mess with you, and develop security rules to prevent that. The result-- and this was the dealbreaker for me-- is designing the backend data structure around security, instead of designing the backend data structure around the vitally important goal of supporting your business use cases.

In the end I decided it's better to just pony up and make a server. Service Fabric, and particularly the distributed actor model, gives me the realtime functions and reliability I want for my use cases without making me administer servers.

[1] https://docs.microsoft.com/en-us/azure/cosmos-db/introductio...

[2] https://azure.microsoft.com/en-us/services/service-fabric/


I'm afraid I don't yet follow your problem: what's hard about having a function listen for a change in the db, perform the business logic, and update another value?


I need to segment the db into two sections: a section the client can update, and another section that only the protected business logic can update. It's doable, but more difficult than the traditional setup. Also when I started this project functions weren't available-- that happened in March of this year. Initially I used a machine elsewhere in my infrastructure to do this work of watching for the changes then writing to the restricted access areas.
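Roughly, that segmentation might look like this in database.rules.json (a sketch with made-up node names; the protected section is writable only via the Admin SDK, which bypasses security rules):

```json
{
  "rules": {
    "requests": {
      "$uid": {
        // clients may read and write their own requests...
        ".read": "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    },
    "protected": {
      // ...but this section is read-only to clients; only the
      // privileged backend (Admin SDK or a cloud function) writes here
      ".read": "auth != null",
      ".write": "false"
    }
  }
}
```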


I think AWS is the all-purpose fallback. It has some turnkey solutions for some problems of various quality, but you can always just use EC2 with custom code. S3 is a miraculous product. Lambdas are pretty cool. You can try your hand with DynamoDB, but RDS is there for you too. There's generally much less lock-in.


I've thought about Cloud Functions but Lambda may be better.


I'm not the author, but I moved back to good old REST after flirting a bit with GraphQL.


What was your decision for REST over GraphQL?


GraphQL requires you to know upfront what you want to receive. You can't, for example, receive data for a menu with submenus of undetermined depth. You can hack your way around the problem, but it's ugly.

Another problem GraphQL hasn't tackled (afaik) is polymorphism. You can say "hey give me this person" or "give me this company", but what if you want a customer that can be either a person or a company?


> Another problem GraphQL hasn't tackled (afaik) is polymorphism. You can say "hey give me this person" or "give me this company", but what if you want a customer that can be either a person or a company?

http://graphql.org/learn/schema/#interfaces

  searchPersonOrCompany(name: "abc") {
    ... on User {
      first_name
      last_name
    }
    ... on Company {
      business_name
    }
  }


As for the undetermined depths problem -- I agree to an extent.

GraphQL can't return depths without you querying for it unless one of the fields is a JSON blob... but at the same time, I like that it does that.

In your example of a menu w/ submenus, I think I would prefer to load the first 2 levels, then preload level 3 when level 2 is activated, and preload level 5 when level 4 is activated, and so on.

Apollo makes this quite easy.


Are you saying to request each level as needed? That's a lot of round trips, especially if levels need to be opened on mouseover or something.


Nah, just query for as many levels as you think you might need up front.


Well today I learned a couple things. Thanks for the example and ideas.


If I want to get all levels of an arbitrarily nested tree, I just query for all of the objects that are in the tree in a flat array and then reconstruct the tree on the client side using the parent/child Ids.

This is similar to what I would have to do server side if I were using SQL to get the data and then processing it to return a tree in JSON.

If I know how deep the tree will be (and it is only two or three levels) I query it directly with graphql
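A sketch of that client-side reconstruction (Python for illustration; the `id`/`parent_id` field names are assumptions):

```python
# Rebuild a nested tree from a flat list of nodes carrying parent ids,
# the way you'd do it after a flat GraphQL query for the whole tree.

def build_tree(nodes):
    """Turn a flat list of {id, parent_id, ...} dicts into nested trees."""
    by_id = {n["id"]: {**n, "children": []} for n in nodes}
    roots = []
    for node in by_id.values():
        parent_id = node.get("parent_id")
        if parent_id in by_id:
            by_id[parent_id]["children"].append(node)
        else:
            roots.append(node)  # no parent in the result set => top level
    return roots

flat = [
    {"id": 1, "parent_id": None, "name": "File"},
    {"id": 2, "parent_id": 1, "name": "Open"},
    {"id": 3, "parent_id": 2, "name": "Recent"},
]
tree = build_tree(flat)
```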


I've been using graphql recently and I have found it very productive.

Providing a lot of flexibility to the client when querying the server eliminates a lot of server side work that would normally be required in order to implement new user stories.

It does require a bit of a mindset change so I am often having to force myself to try doing things in a different way to my first assumption.


hi guys, OP here. updated the post with my current stack (it is everything but fancy)


> You can’t easily add claims (groups, roles, feature toggles, etc.) to user’s json web token, meaning you have to create and supplement each authentication with a call to the Real-time database.

What? I use claims in JWT with firebase and it works like a charm.


are you adding custom claims to the core firebase user or as a custom jwt/additional user object? can you have a user signed in with facebook and assign a "experiment1=b" claim to it?


JWT claims are strictly meant for JWT. I don't know how or why they would work with Facebook login, but if you need to support Facebook/Google login for users in addition to JWT, you can still implement claims/roles. You just need to create a collection for users in firebase that stores claims for all users. Protect it with security rules and then use this collection to power security rules for other collections. Something like:

  ".read": "auth != null && (root.child('users').child(auth.uid).child('claims').child('isAdmin').val() == true)"

This security rule lets users read data only if their account is present in the `users` collection and has `claims.isAdmin == true`. I did not test the above snippet but I have something very similar working in a project already.


my database.rules file is over 2000 LOC, and I have lots of these: root.child('users/' + auth.uid + '/roles/admin').exists(). that's my point really, you need a supplemental user record in a second database, even if all you wanted is a bit of information attached to a user login. see this guy's question: https://stackoverflow.com/questions/43329143/access-firebase...


The stack he describes at the end is pretty similar to mine (only Python instead of Node and PostgreSQL instead of MySQL), although I actually don't have a problem with firebase.


I have had so many issues with Firebase, but the one that really pisses me off the most is that they used to advertise things like "scale worldwide to millions of users" and "Unlimited Connections" but then cap you at 100k concurrent users (10k initially and will gradually bump you up to 100k). When we got in contact with them about this they were nice enough to explain that we could create another database and shard our data manually.

Fuck you Firebase, what a load of bull shit. Implying your database can scale infinitely because we can manually shard our data is like calling McDonalds an all you can eat restaurant because I can keep buying more chicken nuggets.

I've yet to meet anyone who has used Firebase on a large scale project and been happy with it. Firebase is a toy at best, IMO.

I honestly feel a bit bad for the development team at Firebase, they actually built something really cool that has a lot of potential. The problem I have is that someone took this cool project and basically lied to the public about its capabilities.


That's not the only limit btw. If any node (say "/users/") crosses 'some' number of subnodes ("/users/a", "/users/b") then you cannot do any queries on the node itself. I cannot even get the IDs of the subnodes ("a", "b", ...).

I also got similar advice to shard my users or something.

So right now, we have crossed that limit and are unable to know how many users are on our system. Their server just fails and takes down the DB for 10 or so mins if I do that query.

Firebase is good for MVPs and prototypes but not at all scalable.


> Their server just fails and takes down the DB for 10 or so mins if I do that query.

I don't know if you are aware, but their DB can't handle more than 1000 requests/sec, so if you are iterating through a list of nodes and requesting data for each one you can hit that limit (not a good practice, but sometimes you have to). Additionally, once you hit that limit, the DB slows down but keeps accepting requests, meaning if you keep hitting it, even at a slower rate, you make the backlog worse. Seriously, be very very careful not to go over that limit, we found out the hard way.
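If you have to iterate anyway, a crude client-side throttle helps stay under the cap; e.g. a token bucket (Python sketch, all numbers illustrative and below the 1000/sec figure mentioned above):

```python
# Minimal token bucket: each request consumes a token; tokens refill at a
# fixed rate, so sustained throughput never exceeds `rate` per second.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self):
        """Block until a request token is available."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

bucket = TokenBucket(rate=800, capacity=50)  # stay well under 1000/sec
# for node_id in node_ids:     # hypothetical iteration over nodes
#     bucket.acquire()
#     fetch(node_id)           # hypothetical request
```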


Iterating would require me to first get at least a 'shallow' list of keys for that node. But even that one REST query for a shallow list of all keys crashes their server instance.

I am not even sure how I should get keys for all my user nodes anymore.

I tried doing it from an offline JSON backup they generate. But that one giant 100GB+ JSON is impossible to parse with any available tools.


Hmm. Congratulations, you've nerd-sniped me this morning!

This sounds like a classic use case for a streaming parser. The data is a mile wide and an inch deep, so at any point the memory requirements should not be too high.

What do you want to do when you've parsed it? Insert it into a real database? Iterate over it? Would simply turning it into a list of user IDs one per line suffice?


Hello. The JSON dump looks something like this:

  {
    users: {
      "userid1": {...},
      ...
    },

    ...
  }

I have tried the jq stream parser to split the big dump into files like:

- users.json
- chatrooms.json
- ...

So I can then work on individual nodes.

But jq fails silently after 12-24 hours of processing. I am still researching this in free time.

If I can just get the keys (like "userid1") I can do the rest from firebase itself.


I would think something like this would work:

  cat input.json | jq -c --stream '. as $in | select(length == 2 and $in[0][0] == "users") | {}|setpath($in[0][1:]; $in[1])' > users.jsonlines
Output would be a file that looks like this:

  {"userid1":{"name":"user1"}}
  {"userid2":{"name":"user2"}}


I think I have tried something similar. I will try this one too. Thanks a lot!


The parser would have to parse the whole thing before being able to split it.

You could do the splitting yourself (it's just plain text) and create multiple files whose contents are just an array in the format:

  [
    {},
    ...
  ]

Then you can use JSONStream to load each of those files individually and map/reduce on the contents.


You could loop through the file counting braces, storing line numbers. Then split the file along those line numbers. The smaller files might not have the exact formatting you need to run through a parser, but you should be able to manually adjust it then, hopefully.
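A rough sketch of that brace-counting scan (Python; it tracks string/escape state so braces and colons inside values don't miscount; shown over an in-memory string for brevity, but for a 100GB dump you'd feed it chunk by chunk):

```python
# Scan raw JSON text once and collect the object keys that appear at a
# chosen nesting depth -- e.g. depth 2 for the user ids in a dump shaped
# like {"users": {"userid1": {...}, ...}} -- without fully parsing it.

def keys_at_depth(text, depth):
    keys = []
    in_string, escaped, level = False, False, 0
    current = []          # characters of the most recent string literal
    for ch in text:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
            else:
                current.append(ch)
        elif ch == '"':
            in_string = True
            current = []
        elif ch in "{[":
            level += 1
        elif ch in "}]":
            level -= 1
        elif ch == ":" and level == depth:
            # a ':' outside strings follows a key; keep it if at our depth
            keys.append("".join(current))
    return keys
```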


It's not impossible. You can do it pretty simply using https://pypi.python.org/pypi/ijson


Wow.

Running a query takes down the db?

That sounds like a major problem. How can that happen?

Maybe returning incorrect or incomplete results due to sharding... But taking the db down? That's very... Unexpected.


Never used firebase but for what it's worth you can take down most databases with a bad enough query.


In the RDBMS world, you can take down pretty much any database by giving someone in accounting a copy of Crystal Reports :)


How, specifically? Something like a very complex query joining too many tables, or maybe a full Cartesian product of n > 2 tables?


You are right. But we usually have multiple ways to query our data in a DB. Not many ways to do that with the limited Firebase API.


Nope. A clear server crash with "Internal Server Error" and the DB being totally unavailable for 10-15 mins. Apparently it's 'normal'.


> The problem I have is that someone took this cool project and basically lied to the public about its capabilities.

Oh come on, it was a startup. The same people who made the tool told the lies (if that's what they were). Now under Google, this is still the case - the original founders now lead the Firebase department if I'm not mistaken. I'm pretty sure they have influence over the marketing messages.


> Oh come on, it was a startup.

And then people are surprised when I advise them to avoid startup products.

It's the classic tragedy of the commons; lying your way into an acquihire is a win for the successful founders, and a loss for everyone else in the ecosystem, as more and more people grow sick of being played that way.


You misinterpreted my comment on purpose. I didn't say "it was a startup, so it's ok" I said "it was a startup, so the same people who built the service did the marketing".

Also, Google's Firebase acquisition was many things, but not an acquihire.


funny how you're right. i just got an api shutdown notice from stormpath (yet another product that was supposed to save us developers): https://stormpath.com/


When was that announced?

You basically have 2 weeks to implement user management functionality? An acquisition so hot they basically say fuck you to all customers. Amazing.


The announcement post[0] seems to be from March, which would give one about 4.5 months to migrate.

--

[0] - https://stormpath.com/blog/stormpaths-new-path


Wow, didn't know that. :/ Guess it is really only useful for prototypes.


I'm curious, what DB tool will you now consider using instead?


Same here. I've started a prototype using firebase, however, I'm concerned about issues like this and the fact that sometime in the future I get a shutdown notice and I will have to migrate.

I can roll my own using Rails but I'm trying to avoid that.


good old mysql. it is the cockroach of software.


> at 100k concurrent users

That is a huge number of concurrent users though.


Depends on your perspective, but it doesn't matter as it's not the unlimited they advertised.

Our app has multiple millions of users and we wanted to add a small feature which we were going to use Firebase to back. We ended up having to go back to the drawing board when we started a gradual roll out and hit that limit pretty quickly.


The risk of building on a platform that you know has that hard cap isn't worth it for many ventures though


same thing bit me. also the onDisconnect events are not guaranteed to fire, so you get orphaned stale data everywhere, and need to run manual clean up scripts.


It is definitely true that pagination could be much easier. The way it currently works however seems fine to me: you query a node with, say, a limit of the first 30 children. Then you store the key of the last result, and use it as the starting point for the next limited query. This way you can easily query a node with 1000+ children without overloading the client.
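Generically, that cursor loop looks like this (a Python sketch of the same keyset-pagination idea, not actual Firebase API calls):

```python
# Keyset pagination: fetch a page ordered by key, remember the last key,
# and start the next query at that key, dropping the duplicate boundary item.

def fetch_page(data, start_key=None, limit=30):
    """Return up to `limit` (key, value) pairs ordered by key,
    beginning at start_key (inclusive)."""
    keys = sorted(k for k in data if start_key is None or k >= start_key)
    return [(k, data[k]) for k in keys[:limit]]

def iterate_all(data, page_size=30):
    start = None
    while True:
        # after the first page, fetch one extra item to cover the boundary
        page = fetch_page(data, start, page_size + (1 if start else 0))
        if start is not None:
            page = page[1:]  # drop the boundary item we already yielded
        if not page:
            break
        yield from page
        start = page[-1][0]

data = {f"child{i:04d}": i for i in range(100)}
items = list(iterate_all(data, page_size=30))
```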


In the server-less section you stated:

> "You are forced to do everything client-side"... "including your SECRET SAUCE"

How I see it is the "static content" (which I assume is a SPA) shouldn't contain anything more than the conditional logic to decide when something should be rendered or retrieved from/ sent to the server.

If the "SECRET SAUCE" you are referring to is the conditional logic, then you shouldn't worry as this can already be derived purely from interaction with the UI.


It sounds like the company is growing up. Firebase is great for simple use cases but has too many limitations for most cases.


Ah the tech cycle of short-sightedness going from an advantage to a disadvantage.


So far, I have no complaints using Firebase Cloud Messaging for push notifications.


How do GraphQL subscriptions compare for providing realtime functionality?


Remember that GraphQL is only the query language and the realtime server implementation is left to each user.

That said, the idea is similar. You subscribe and unsubscribe to data. Some callback is called when data is updated.




Guidelines | FAQ | Support | API | Security | Lists | Bookmarklet | DMCA | Apply to YC | Contact

Search: