1) It's as declarative of an approach as it can be without sacrificing flexibility.
2) It follows the structure of your data and is itself a data structure that can be trivially manipulated.
3) It centralizes a concern. Security measures often end up spread all over the place, making it very difficult to reason about what is or is not correctly handled. Not so with this model.
I'm excited to see what people do with Firebase now that the one thing that was really holding it back has a solution. I know from my standpoint, it's certainly nice to have someone else dealing with the nightmares of real-time scaling and security so that I don't have to.
I highly recommend it if you're looking at creating a real-time app!
Also, one side note: it's not always clear from reading the docs, but changes to the security rules can be made dynamically and take immediate effect.
I think this is awesome and I am glad you have it working. I'll be seriously considering Firebase for future projects.
You don't particularly need the "security API" of an app framework to be externally validated. For the most part, the "security" components of app frameworks are boring, modulo some really simple-to-answer questions ("how do you store passwords", "how do you handle password resets").
When people talk about "security frameworks", they tend to be thinking about what we used to call "AAA systems" (authentication/"authn", authorization/"authz", and accountability/logging). You don't need to be an appsec practitioner to look at an AAA system and see whether it's any good. Does it apply authn by default? How easy would it be to accidentally expose a method that didn't require a login? Does it apply authz rules by default? How much of a pain would it be to reliably apply authz rules across my project? If stuff is easy and has sane defaults, you'll use them and benefit --- like you do in Rails, if you use association finders off a "current user" instead of the class object finders.
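The "authn by default" idea above can be sketched in code. This is a minimal, hypothetical routing table (not any real framework's API): handlers require a logged-in user unless explicitly marked public, so a forgotten annotation fails closed rather than open.

```javascript
// Sketch: default-deny routing. All names here are illustrative, not a
// real framework API. The point is that anonymity is opt-in per route,
// so forgetting to annotate a handler denies access instead of exposing it.
const routes = {};

function route(path, handler, opts = {}) {
  routes[path] = { handler, public: !!opts.public };
}

function dispatch(path, req) {
  const entry = routes[path];
  if (!entry) return { status: 404 };
  // Authn by default: anonymous requests only reach opt-in public routes.
  if (!entry.public && !req.user) return { status: 401 };
  return { status: 200, body: entry.handler(req) };
}

route('/login', () => 'login page', { public: true });
route('/account', (req) => `settings for ${req.user}`);
```

The Rails point in the comment above is the authz analogue of the same default: scoping finders through a `current_user` association means a forgotten check returns nothing, instead of returning another user's records.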
At Matasano, we have a list of "seven deadly features": the features in which we tend to find the first and worst flaws in web apps. They're:
1. Password reset
2. Email functionality
3. Interfaces to thick clients; "REST API" &c
4. File upload
5. File download
7. "Advanced search"
Only one of those is a security feature, and it's also a security feature that most "security APIs" don't actually implement (because every app wants to have a different UX for it).
Lest this sound self-interested: most frameworks haven't been formally externally tested. I'd be thrilled if someone had their framework tested formally, but I don't tell people not to use Rails or Django or Spring just because they haven't been pentested by an appsec firm.
What are the most common dumb things people do with a password reset feature?
Ooh, I think I can actually do this: You send the address on file an email which contains a random, time-limited, one-use token which allows them to log into their account. You should not disclose presence or absence of the account via the reset mechanism on the site (only via email), but that's a comparatively minor issue. A successful password reset should end all in-progress sessions for the account. You should also do all the table-stakes web development things like e.g. not allowing SQL injection or XSS on the page.
There's plenty of failure conditions for password resets:
1) You create non-random reset tokens, such as MD5(user.id) or MD5(user.email), potentially compromising security of accounts.
2) You allow resets of your team's admin accounts to happen through the publicly accessible password request feature. (n.b. Almost never a good idea, for one because it means you're probably doing shared authn/authz between your app and your admin console, and that is generally a very bad idea. I say this in spite of shipping it in every product I write.)
3) You allow password reset tokens to get re-used, which increases the attack surface against individual accounts by e.g. making saved-to-disk emails into attack vectors for them. (Standard reset mechanisms only make emails-in-flight into attack vectors.)
Can you please elaborate more, or link to articles about why this is a bad idea? What are the common failure scenarios that splitting normal & admin users fixes?
Separating the admin app and the normal app upper-bounds the impact of those flaws at something lower than "The worst thing someone could possibly do with the admin console." Given that many admin consoles have fairly extensive functionality, that's generally a good idea. (I mean, my businesses are pretty low risk as things go, and my admin consoles would trivially do about $20k of damage just with refunds if compromised.)
Needless to say, you could arbitrarily set any account's password to whatever you wanted.
Account harvesting through the password reset functionality is one that pops up all the time. A lot of businesses now just accept the risk (which I no longer really get all that worked up over, at least provided they aren't also enforcing account lockout for invalid attempts).
But it /is/ server-side validation, which you as a developer get to specify.
(That said, we're definitely happy to get feedback on our approach from any security experts out there that want to take a look!)
The Scratchpad.io tool that has been on HN for the last couple of days has really blown me away with what Firebase is capable of. Keep up the good work guys!
As an aside, the model you guys ended up with is fairly similar to the one I proposed back in April: http://news.ycombinator.com/item?id=3864430
I will point out though that what you described looks like an ACL system with groups (akin to unix permissions), and we think our Security Rules system is actually quite a bit more powerful than that (you could easily build ACL / groups on top of it, but you can build lots of other models too). :-)
So you could store a 'public' flag on every picture in your Firebase data, and then write a single rule that checks the value of that flag, and if it's true, allows anonymous access (and if it's not true, checks that the accessing user is in the picture owner's friends/family list).
Assuming your rules let the picture owner edit the 'public' flag, they'll be able to flip it on and off, granting or revoking access to non-friends/family. And since everything in Firebase is real-time, the change will instantly take effect.
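A sketch of what such a rule might look like in Firebase's security rules language. The built-ins (`data`, `root`, `auth`, `.child()`, `.val()`, `.hasChild()`) are Firebase's, but the data layout here (`pictures`, `owner`, `friends`) is invented for illustration and would need to match your actual structure:

```json
{
  "rules": {
    "pictures": {
      "$picture_id": {
        ".read": "data.child('public').val() == true || root.child('users').child(data.child('owner').val()).child('friends').hasChild(auth.uid)"
      }
    }
  }
}
```

The first clause grants anonymous access when the `public` flag is true; otherwise the rule falls through to checking the accessing user against the owner's friends list.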
I was critical of the firebase security situation when they launched, but this looks like a good solution indeed.
Well done James T. and the rest of the Firebase team.
Not all apps need this stuff right away, of course, and some never do. But it's hard to do these things w/o a secure server environment for privileged code.
And then of course there's the usual tradeoffs between having control over your hosting environment (a la Meteor) and outsourced backends as a service (a la Firebase).
We're big fans of Firebase, BTW. We think more databases will move to a realtime model. Excited to check this out.
We're big fans of Meteor as well. Firebase is just a data layer -- we do high-performance, scalable real-time data sync -- but we aren't a complete framework for building an app. Generally devs will want to use something like Meteor (or Ember, or Backbone, or Spine, etc) to help them organize their code and manipulate the DOM.
In the future we hope to have a Meteor integration, so you can power your Meteor app without running any of your own servers (and do it on a super-low-latency, highly scalable backend).
The model is reversed in this case though: instead of clients talking to your application server, clients talk to your database (Firebase), and your application server only ever talks to your database (Firebase). It's a bit of a conceptual shift, but we have many customers doing this in production and it works quite well.
Our rules are expression-based, so they have fast and predictable execution times, meaning we can run these on our own servers and scale your app for you with guaranteed performance.
Our rules also let you specify all of your security logic in one place, rather than sprinkling it around your code, meaning it's easier to review and reason about (and avoid holes).
Having rule expressions attached directly to data locations also ends up being much more concise (which means less boilerplate code and fewer opportunities for mistakes), which we think is important for a security API.
Since the whole rules language is custom, we're not using any off-the-shelf projects presently, and to be honest, we've only scratched the surface of what's possible.
But given the nature of the rules we can do much more interesting analysis, like detecting redundant or conflicting rules. We should also be able to answer questions for the developer like "What parts of my Firebase data can an unauthenticated client read or write?" and "What parts can a user authenticated as 'joe' read or write?"
This sort of analysis should help developers reason about their rules and verify their correctness. Honestly, I'm pretty excited about the possibilities. :-)
We're real-time, all the time.
We're also focused on the data layer of your app, and on the web, rather than being a mobile-focused company.