Isn't one of the main benefits of something like Firebase or Parse that you don't have to run it? It's nice that Kinto packages the stack together and provides plug-and-play APIs, but there's quite a bit of expertise and overhead involved in operating a backend stack in production.
You can use Parse and Firebase to store sensitive data by applying client-side encryption. Of course, one of the biggest problems is: how do you query that data once it's encrypted? Lately, though, some start-ups are aiming to make encrypted data searchable. See ZeroDB, CryptDB, and IQrypt; the last one is designed to be used with Parse and Firebase: http://iqrypt.com/docs/Home/
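For what it's worth, the encryption side is the easy part; a minimal sketch (using Node's built-in crypto, with key management and field handling left out) might look like this:

    import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

    // Encrypt a value client-side so the backend (Parse, Firebase, ...) only
    // ever sees ciphertext. The key never leaves the client.
    function encryptField(plaintext: string, key: Buffer): string {
      const iv = randomBytes(12);                        // unique IV per value
      const cipher = createCipheriv("aes-256-gcm", key, iv);
      const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
      const tag = cipher.getAuthTag();
      // Store IV + auth tag + ciphertext together, e.g. as one base64 string.
      return Buffer.concat([iv, tag, ciphertext]).toString("base64");
    }

    function decryptField(encoded: string, key: Buffer): string {
      const buf = Buffer.from(encoded, "base64");
      const iv = buf.slice(0, 12);
      const tag = buf.slice(12, 28);
      const ciphertext = buf.slice(28);
      const decipher = createDecipheriv("aes-256-gcm", key, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
    }

The hard part is exactly what you'd expect: with a random IV per value the backend can no longer match or range-query anything, which is the problem those start-ups are attacking.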
We're actually building an open-source alternative to services like Firebase and Parse. Ours is a fully peer-to-peer, real-time, graph database that doesn't require a lot of expertise or overhead, and is extensible.
Regardless of this (assuming this is even true), I can't imagine that a finance company would want to store their data inside something like Firebase - The risk of data theft is too high and the value of the data is also too high.
When you store everything in a big centralized system, the risk of that data being compromised increases greatly. Right now, the reason why no one is hacking Firebase is because the data which is being stored in there is low-value.
If banks and hospitals started storing data in Firebase, you can be sure that it would attract the attention of hackers and you can be sure that they would find an exploit eventually.
You can't possibly place all of the world's high-value data inside one or two systems. Every single change made to the codebase is a potential security vulnerability.
Also, I imagine that employees of Firebase have access to all your data - what if one of them decided to share your data with a competitor? Humans are corruptible - this is too much power to put in the hands of so few people.
Yes, some types of data are highly valuable because they can be exploited by someone to make a lot of money.
Data related to a person's education/intelligence is valuable too.
If you could get a list of all emails of people in the world with IQ < 70, you could easily take advantage of those people by sending them scam emails (for example).
Also, someone's preference for particular adult content is highly valuable (for blackmail).
A person's location data might also be quite valuable (especially if that person is a politician/celebrity).
The media would probably pay a lot for health info on celebrities, politicians, ...
Scammers could use the info to find targets (fake hospital bills, "new medications", ... are easier to sell with accurate personal information, and financial scams might work better on families that are desperate to pay expensive bills).
Publicly shaming or attacking people with "bad" or "disgraceful" health issues. (HIV, gender operations, mental issues...)
There are other forms of value besides money; in terms of social value and reputation, a hack that steals a large quantity of data about people is worth a great deal to the hacker.
What do you mean by HIPAA certified? Do you mean they'll sign a BAA? I don't think they will.
...and even if they did, the technical requirements of HIPAA compliance go much further than what Parse has to offer. You'd be much better suited building an application and hosting it on Catalyze[1], which covers every aspect of HIPAA compliance and has a HITRUST certification.
My thoughts exactly. Kinto is a database with an API that you have to configure, deploy, manage and scale. It's not really an alternative to Parse/Firebase, as it is an alternative to Mongo.
Full disclosure: former Parse Push tech lead, current Firebase engineer.
If your slowdown only happens while the device is on mobile, it's almost certainly due to something outside of the Parse stack. Push notifications have grown into an async information pipe, but never a real-time or reliable one. The most common example of this is APNs (Apple's push network): Apple only buffers the most recent undelivered notification for an (app, device) pair, and messages without UI (aka "silent push") must be sent at a low priority, which will likely incur extra delays.
If your goal for silent push is to help improve cache hits by pre-populating an app with useful data, don't worry about the slowdown. If you're using push for something that must be reliable or real time, you're probably using the wrong tool. Consider something like Firebase for this instead. If you're already deeply invested in Parse, I've seen people use Parse Cloud Code to replicate writes against Firebase and use Firebase to build a client sync layer.
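For the Cloud Code route, the pattern is roughly an afterSave hook that mirrors each write into Firebase over its REST API; a sketch (the class name, field names, and Firebase URL are all placeholders):

    // Hypothetical Parse Cloud Code hook: mirror each saved "Message" object
    // into Firebase so clients can use Firebase's sync layer for delivery.
    Parse.Cloud.afterSave("Message", function(request) {
      var msg = request.object;
      Parse.Cloud.httpRequest({
        method: "PUT",
        url: "https://your-app.firebaseio.com/messages/" + msg.id + ".json",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ text: msg.get("text"), sentAt: msg.updatedAt })
      });
    });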
I'm not sure why no one mentions RethinkDB. We use it and are extremely happy with its API design, its realtime updates, and how easy it is to scale.
I believe, really, it's more about being indie and such. It's the same reason many people just use Gmail - or Google Apps for your domain - as opposed to doing the heavy lifting of running their own email server, etc. Not that I'm an expert, but running one's own email - and ensuring a high degree of deliverability nowadays - is quite difficult. There are those who choose to run something on their own, maybe to learn, maybe for privacy's sake, etc. I'm sure those who do so dive in with their eyes wide open. ;-)
Why not CouchDB + PouchDB? It's offline first and replicates over HTTP.
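For reference, the basic offline-first setup with PouchDB syncing to a CouchDB endpoint is only a few lines (the database name and remote URL are placeholders):

    // Local PouchDB database in the browser, continuously replicating
    // both ways with a remote CouchDB over plain HTTP.
    var local = new PouchDB("todos");
    var remote = new PouchDB("https://example.com/couchdb/todos");

    local.sync(remote, { live: true, retry: true })
      .on("change", function(info) { /* update the UI */ })
      .on("error", function(err) { console.error("sync error", err); });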
For more fine-grained permissions and other features I'm also developing a small Go database compatible with PouchDB at https://github.com/fiatjaf/summadb
I hope it will be able to sync through websockets soon.
I find it an ongoing mystery how many of these tools and services pop up that aren't based on Couch/PouchDB, don't reference Couch/PouchDB, aren't compatible with the Couch sync protocol, and don't even seem aware that Couch/PouchDB exist.
I'm 100% waiting to be sold on "here's a new tool that does what CouchDB does, but is better in these specific ways", but if your sales pitch is just "we're working on writing an exact clone of CouchDB because we didn't realise CouchDB existed" it doesn't really sell me.
(Nothing wrong with a dose of Not Invented Here Syndrome; a lot of progress comes from reinventing the wheel. What gets me is the No Idea What's Been Invented Syndrome.)
I am a little confused about why PouchDB isn't listed on the comparisons page; I am fairly certain they sent me a comparison listing and asked me to review it at some point. In all fairness, though, they did investigate and give reasons not to use PouchDB in this post: http://www.servicedenuages.fr/en/generic-storage-ecosystem
In my extremely biased position (as the maintainer of PouchDB), I would really have liked to see this based on the CouchDB protocol / PouchDB codebase. It is true that neither CouchDB nor PouchDB has a particularly expressive and powerful permission system (which looks to be the main downside), but I have been very aware of that while building PouchDB and have made a lot of effort to ensure a powerful and flexible permissions system can be built on top of it (for example https://github.com/thaliproject/node_acl_pouchdb).
I do think that when you are tasked with providing cloud services for existing Mozilla products, there is a lot more risk involved in leveraging an existing established ecosystem like Couch/PouchDB, so it's fairly likely that this was the best choice for Mozilla services (the reverse is also true, however, when looking at this as a generic library).
As a disclaimer, I am a maintainer of PouchDB and a Mozilla developer. I have worked with and talked to this team about Kinto, and I work on some features that use it. They are doing awesome stuff, some of which Pouch may possibly look to do as well.
Yeah, our doc is incomplete; I did file a bug about this the other day. We should compare products (PouchDB vs. Kinto.js) and, separately, deployments (Kinto server vs. CouchDB).
As a Firebase customer and former CouchDB user, I can tell you that the use cases are really not as similar as you apparently think they are.
Until very recently, Couch/Pouch would only sync entire databases, and recommended a "One database per user" approach. This is improving now, but it's still very coarse-grained. Last time I checked, anything that has data that you couldn't shard easily is basically unfit for Couch/Pouch - e.g. any messaging app or a Twitter clone. How would you select which data goes to which user? Or do you just want to put all tweets, globally, on everybody's phone?
What I need from such a database is:
- strong access controls (don't want clients to see data they're not allowed to!)
- the ability to seamlessly replicate a view from the server to a web client without me having to think about it (although only read-only)
So far none of the web NoSQL databases I've looked at support these. Strong consistency seems to be unfashionable, presumably because it's hard to shard. I was really surprised to discover that CouchDB apparently doesn't support access controls, though --- if a client has access to the database endpoint, it can see everything. Aren't access controls part of the core competency of a multi-user database?
The Couchbase Mobile Sync Gateway tackles fine grained access control head on. Basically you write a function that stripes your data into channels, and manages access on a per channel basis.
Sync Gateway looks very plausible... but having to run an extra server just as access control to the main database? That's so ugh.
It also looks like this requires me to annotate each document with ACLs. I was rather hoping to be able to just sync a view, so that changing the database would cause players with changed views to automatically resync. I'm not terribly happy with having to recalculate the views of all players and then update the ACLs of all documents manually on every database mutation; that's a lot of writes.
The database speaks a binary database protocol. The Gateway speaks WebSocket and HTTP.
Access control is managed via channels, and you configure the gateway with a JS function that determines which channels a document belongs to, so you don't need ACL annotations; the ACL is determined dynamically by your code at write time.
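To give a feel for it, a sync function along these lines goes in the gateway config (the doc fields and channel names here are made up for illustration):

    // Sketch of a Sync Gateway sync function: route each document into
    // channels and grant users access to those channels at write time.
    function(doc, oldDoc) {
      if (doc.type == "message") {
        // Put the document in a per-conversation channel...
        channel("conversation-" + doc.conversationId);
        // ...and grant the participants access to that channel.
        access(doc.participants, "conversation-" + doc.conversationId);
      } else {
        // Everything else only lands in the writer's private channel.
        requireUser(doc.owner);
        channel("user-" + doc.owner);
      }
    }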
There's a tiny bit of write amplification as we persist channel membership in an index, but that's to avoid massive read amplification of irrelevant records when clients sync their subset of channels.
> I was really surprised to discover that CouchDB apparently doesn't support access controls, though --- if a client has access to the database endpoint, it can see everything.
Not sure what you mean; CouchDB has strong access controls, although they are at what in SQL terms we'd call the table level, rather than the row level. But it's not really any different than, e.g., MySQL? Or MongoDB, if you want to compare NoSQL to NoSQL.
If you have data in a table that a client shouldn't see, don't give that client access to that table.
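Concretely, per-database access control in CouchDB is just the database's _security object; something like this (user and role names are examples, and it has to be sent with admin credentials) locks a database down to specific members:

    // Per-database ACL in CouchDB: PUT a _security object on the database.
    // Only listed members (or server admins) can then read or write it.
    // Names and roles are examples; admin credentials are omitted here.
    fetch("http://localhost:5984/invoices/_security", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        admins:  { names: ["alice"], roles: ["accounting_admin"] },
        members: { names: ["bob"],   roles: ["accounting"] }
      })
    }).then(function(res) { console.log("security updated:", res.status); });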
You might be interested in https://github.com/couchbase/sync_gateway which is the Couchbase Mobile access control and sync layer (compatible with PouchDB etc). It has fine grained access control based on a channel metaphor. And it's also written in Go, so you might be interested in using parts of it.
Specifically if you wrap your storage layer in a facade that looks like https://github.com/couchbaselabs/walrus you'll be able to use our suite of storage layers, and we'll be able to use yours. That means it'd be trivial to swap out your storage for something scalable like Couchbase Server, or something tiny and embedded like ForestDB or RocksDB.
As far as I can understand, basically I must implement the Bucket interface[1]? I don't know if that will work, since I'm storing things in a somewhat different manner, but it's definitely worth examining.
Speaking of finer-grained permissions in CouchDB/PouchDB: I'm working on implementing that now in my application. Can filtered replication (server-side) solve this problem?
For an application targeting small-businesses, I'm considering the following setup on the server. Has anyone done this or something similar?
- One DB per user, plus a Company DB
- Filtered replication from the Company DB to the User DBs
- Full replication from the User DBs to the Company DB
- All reads/writes/deletes from the client (or PouchDB sync) go to the User DB only
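Filtered replication can work for the Company DB -> User DB leg; a sketch of what that might look like (the doc.visibleTo tagging scheme is just an assumption about how documents get marked per user):

    // Design document on the Company DB: only replicate documents that are
    // tagged as visible to the requesting user.
    var designDoc = {
      _id: "_design/per_user",
      filters: {
        // Filter functions are stored as strings in the design doc.
        by_user: function(doc, req) {
          return doc.visibleTo && doc.visibleTo.indexOf(req.query.user) !== -1;
        }.toString()
      }
    };

    // Server-side replication Company DB -> one user's DB using that filter
    // (POST this to /_replicate, or save it as a doc in the _replicator DB).
    var replication = {
      source: "company",
      target: "userdb-alice",
      filter: "per_user/by_user",
      query_params: { user: "alice" },
      continuous: true
    };

The usual caveat is that the filter function runs over every changed document for every user's replication, which can get expensive with many users; channel-based systems like Sync Gateway (mentioned elsewhere in the thread) exist largely to index this instead.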
Heh, so last week my buddy Tarek Ziade (one of the developers of this) and I were talking about this (kinto) project he's working on. I asked him what the meaning behind the name was. He said "you know the cloud from Dragon Ball. They don't use it as much in DBZ since they can fly."
I asked him if he meant "flying Nimbus?" Apparently, something was lost in translation (I'm in the US, he's in France).
Would have to throw in Couchbase Mobile for comparison too. It is open source as well, and it provides the offline component with an abstraction over syncing, which is one of the most difficult problems it solves.
From the Overview page: At Mozilla, Kinto is used in Firefox and Firefox OS for global synchronization of settings and assets, as well as a first-class solution for personal data in browser extensions and Web apps.
So I read through the docs for a few minutes and something wasn't entirely clear. It says "offline-first design", so does that mean I can use it as local storage in the browser/device, and it can sync later to a server?
Great! We run Firebase in production. Offline-first is one of the most important features for us (I know, Firebase isn't there yet). I'm surprised this isn't showcased more prominently in their marketing.
Does anybody know about databases that talk WebSockets rather than HTTP? For frontend apps it seems more logical to use WebSockets. Dealing with AJAX and understanding how to manage all of that on the client is still a huge dilemma in the React/Flux/Angular community.
Hi,
at ZetaPush (http://zetapush.com) we provide a realtime BaaS with a database, file storage, a search engine, ... everything over a WebSocket connection.
You can use our JavaScript SDK to connect to our clusters.
I'm a big fan of services like Parse & Firebase so it's cool to see something open source and independent get released.
I've never made the jump to use Parse or Firebase in production though because inevitably I have to write some custom server code to do some integration. For example - payments with Stripe. Charges should only occur on your server (not on the client) but I've yet to see anyone allow you to configure a payment trigger in one of these services.
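Concretely, what I always end up adding is a tiny companion service like this (Express plus the Stripe Node library; the endpoint, amount, and key are placeholders), because the secret key and the charge call can't live in the client:

    var express = require("express");
    var stripe = require("stripe")("sk_test_your_key_here"); // secret key stays on the server

    var app = express();

    // The client only ever sends a card token (from Stripe.js / Checkout);
    // the actual charge happens here, never in the browser or mobile app.
    app.post("/charge", express.json(), function(req, res) {
      stripe.charges.create({
        amount: 2000,            // $20.00, in cents
        currency: "usd",
        source: req.body.token
      }, function(err, charge) {
        if (err) return res.status(402).json({ error: err.message });
        res.json({ id: charge.id, status: charge.status });
      });
    });

    app.listen(3000);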
Basically, you need a service running on a separate server to implement these sorts of integrations. If Firebase is a long-term part of your stack, I agree that sounds unappealing, b/c the reason you're paying Firebase is so you don't need to wrangle a server.
You may think differently if you're using Firebase as the fastest way to get an MVP up and running (that happens to be my use case)
Of course you can also use Firebase in combination with server-side code. Needing server-side code is not a reason to avoid Firebase.
I use Firebase and SQL at the same time. It works quite well if you know what you're doing.
IndexedDB is not limited to 5MB, it's essentially unlimited in Chrome and Firefox. It can be tricky with mobile browsers, but they often also have the option of using an unlimited web app wrapper (http://www.html5rocks.com/en/tutorials/offline/quota-researc...)
And as for WebSockets, HTTP is easier to debug and experiment with, in my experience the performance difference for the typical use cases is negligible and when needed it is trivial to wrap the communication in a socket.
WebSockets are for fast, low-overhead connections. If you end up making a new HTTP connection for every update, and there are a lot of updates happening, you should be using WebSockets instead.
Otherwise, HTTP works best. It has better tooling and will soon be even faster due to HTTP/2.
I'm surprised no one mentions XHR streaming. If your pipe is only one-way, i.e. the server notifies the client, then XHR streaming is quite easy to use and as fast as WebSockets.
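The client side is roughly this (the endpoint and framing are assumptions; here the server is taken to flush newline-delimited JSON on a long-lived response):

    // XHR streaming sketch: the server keeps the response open and flushes
    // newline-delimited JSON events; the client reads the growing responseText.
    var xhr = new XMLHttpRequest();
    var seen = 0;
    var buffer = "";

    xhr.onreadystatechange = function() {
      if (xhr.readyState < 3) return;          // 3 = LOADING, fires per chunk
      buffer += xhr.responseText.slice(seen);
      seen = xhr.responseText.length;
      var lines = buffer.split("\n");
      buffer = lines.pop();                    // keep any partial line for later
      lines.filter(Boolean).forEach(function(line) {
        console.log("event:", JSON.parse(line));
      });
    };

    xhr.open("GET", "/events/stream");         // endpoint name is made up
    xhr.send();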
It really is a challenge. I don't think it'll be realistic to build mid-size, fully offline web apps (excluding tools like Electron) until IndexedDB has wider support. One solution is to garbage-collect the oldest data from localStorage and sync with a server when you've got a connection, but once you're offline you're kinda stuck with 5MB. However, caniuse.com/#feat=indexeddb is optimistic about the future.
The company I work for is trying to make offline/decentralized apps more realistic (gundb.io, github.com/amark/gun) and storage limitations can be aggravating at times.
It's realistic now; most people do not want to be using IndexedDB directly, and pretty much all IndexedDB wrappers will fall back to WebSQL. That gets support back to IE9, which is already unsupported by most of the web anyway.
This isn't really a complete replacement for Parse or Firebase as far as I can see. Apache Usergrid and Baasbox both provide more of the features that Firebase and Parse provide.