Unlike a lot of comments here, I don't need to be sold on the concept. As luck would have it, I am actively building an app that runs mostly in the browser using CouchDB, Kanso, and PouchDB. It's a pretty amazing tech stack; I get a local DB in the browser, which I can sync with the main DB in the cloud whenever the client is connected. Plus I get some nice APIs for stuff like sessions, registering users, security, validation, etc.
I see you've already mentioned switching to PouchDB on your backend, and you're already using CouchDB for the cloud, so it seems like you're basically competing straight up against Kanso. So.... How is what you're doing different from what Kanso does? Why should I use Hoodie over Kanso?
users.create('testuser', 'testing', {roles: ['example']}, function (err){});
session.login('testuser', 'testing', function (err, response){});
Seems like six of one, half a dozen of the other. What's your killer feature? Kanso has shaky documentation and is currently more aimed at couchapps (they strongly assume that you'll serve your frontend code from CouchDB), but I'm not sure how that really differs from Hoodie. :)
If we take the overall architecture as a given (browser based app, PouchDB or equivalent for local storage, syncing with CouchDB in the cloud, and node workers handling the backend work), what's the focus of Hoodie that sets it apart from Kanso?
(In any case, nice stuff. I think the concept is great, and there's definitely room for several competing frameworks in this space.)
Spot on! We stole a lot of ideas from the whole CouchDB application space. @caolan, the Kan.so dev is currently looking into Hoodie as well.
As far as originality goes, we chose a different design approach. We want to cater to people who barely get jQuery and give them a tool to build larger apps. We are building the technology downwards from that idea towards the bits and pieces we have.
One of the fun things about Hoodie is that there is nothing technically new in it. Existing ideas and tools are just put together a different way.
I am trying very hard not to be dumb, but this is beating me.
Seemingly Kanso and Hoodie both have JS stubs at the client and server end; these sync with each other, then sync with whatever you are using at the client (say Backbone) and server (Node? CouchDB? My Postgres server behind my Python app?)
If that's roughly right, why do I do this? I control my server apps, no? Why wrap them in another layer? JSON -> Backbone is hardly a difficult sync operation - I mean, it's called sync().
I really am interested in understanding the philosophy and driving ideas here - everyone has a moment of clarity where they can see how to build an app or a framework, and how it will come together - I am not attacking you (yet!) - please describe your moment of clarity.
"Moment of clarity" is a great cue. To me, Hoodie is enormously empowering (which is why I'm part of the Hoodie team). As a self-taught, non comp-sci frontend dev with a humanities degree I can get all the backend/DB stuff sorted, definitely, but I'd actually rather not. It's difficult and occasionally arcane. I don't enjoy it. And, let's face it, I can make terrible mistakes. With Hoodie, I can quickly build apps by myself, on a solid, proven base, with super-fast setup, easy deployment, offline capability and syncing. And these last two don't even need to be actively invoked, they're just there and they just work. That's practically magical from where I stand.
Hoodie allows me to build things in days that would usually take me weeks and in some cases require additional people. This gives me more time to experiment, refine, user test etc. It also lets me delegate with ease: I don't want or need full control and responsibility over the server and the db and many security aspects, not for what I do. I'll happily pay nodejitsu and iriscouch for that, because they know more about this stuff than I could ever learn in the time it takes me to make the money to pay them.
That was my moment of clarity: Hoodie lets me get on with what I want to do and what I'm good at, and it's really got my back on many of the things I have neither the skill nor the patience nor the money for.
Okay, first off, let's discuss a standard three tier webapp architecture:
Browser <---> Server <---> DB
You get some HTML/CSS/JS from the server, you display stuff to the user. As the user interacts with your app, HTTP requests hit your server which processes them. As needed, the server will make requests to the DB, which will reply with data, which then gets passed back to the browser. Everyone knows how this stack works, I hope. :)
It's simple and it just works. Almost every website you use works this way. But for webapps, it's not always ideal. Let's discuss some of the issues:
First off, this architecture is "always online". If the browser loses connection to the server, then when your user clicks on a button nothing will happen. Not too bad in a desktop environment, but with laptops or - especially - mobile clients this isn't ideal.
Worse, what if you're trying to keep the state in sync with multiple clients (like Trello does)? You're probably using websockets or long polling or similar techniques so that the server can tell clients about the actions of other clients. A network hiccup might cause you to miss one of these updates; detecting and recovering from this sort of sync error is problematic. Generally the solution is to throw away the last few bits of work the user did, and force a full reload and re-fetch of the current state. Which is fine, in most cases. Unless the work the user did is difficult to replicate, or the application state is quite large. Trello takes a second or two to grab the current cards; if your app takes 30s to grab its data, you may want to avoid this. :)
In fact, the more you want to replicate the feel and functionality of a classic desktop app, while allowing multiple users and clients, the more you start to struggle against this architecture.
What other architectures are there? Well, there's one obvious one:
Browser <---> CouchDB <---> Workers
CouchDB is kind of unique among databases because it speaks HTTP natively. It can serve your app to the client, and then turn around and have the clients connect directly to it. It can't, e.g., send emails, but what you can do is use CouchDB as a queue: add a document that says "send an email to X", then spin up some worker processes that monitor the database and process such documents as they appear.
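This queue pattern is easy to sketch. The doc shape below ({type: 'email', status: 'pending'}) is my own assumption, not any official convention; a worker would poll the _changes feed with include_docs=true and filter each batch for jobs:

```javascript
// Pick unprocessed email jobs out of a batch of _changes rows.
// The doc shape ({type: 'email', status: 'pending'}) is an illustrative
// assumption, not a CouchDB or Hoodie convention.
function pickEmailJobs(changes) {
  return changes.results
    .filter(function (row) {
      return row.doc && !row.deleted &&
             row.doc.type === 'email' &&
             row.doc.status === 'pending';
    })
    .map(function (row) { return row.doc; });
}

// A worker would send each job's email, then update the doc's status
// to 'sent' so other workers (and future polls) skip it.
```

The nice part is that the queue is just documents, so it replicates, survives crashes, and can be inspected like any other data.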
CouchDB also has a _changes feed that makes a lot of sync tasks very easy. It also does master/master replication, and is generally a really nice database to work with from a devops point of view. Still, we're missing a crucial bit of magic to really improve over a classic three tier webapp. And that is...
...local DBs, in the form of PouchDB for browsers or TouchDB (apparently called Couchbase Mobile these days, although confusingly it's unrelated to Couchbase, which in turn is unrelated to CouchDB. Don't ask.) for Android/iOS. CouchDB is really really good at syncing and replicating, so now we have a new architecture:

Browser + local DB <---> CouchDB <---> Workers
This isn't appropriate for every task. For one thing, it's quite a lot more work to implement (partly due to inferior tooling, which projects like Hoodie are, thankfully, addressing). But if you really want the best possible user experience, you want it to work even if you're on an airplane or in a tunnel, you want a really snappy UI, you want each client to see the changes made by other clients in as close to real time as practical, you want a consistent experience for native mobile apps and webapps running in a browser...then this is the architecture for you.
So to directly answer your questions:
First, Kanso (by default) wants to do something like:
Browser <--> CouchDB
Which is a really slick, really easy way to do single page applications that don't need local persistence or server processes. But it's not hard to add them on, in which case you have the architecture above. You seem to be imagining something like this:
Hoodie Client <--> Hoodie Server <--> CouchDB
But that's not the idea at all. You're also asking why not do this:
Backbone <---> Python <---> Postgres
And that's a perfectly fine architecture. It works, it's rock solid, and it's easy to code. For many apps (maybe most) it's the best possible architecture (well, I'd suggest Knockout over Backbone, but that's personal preference).
But for the project I'm working on right now, we decided that wasn't quite good enough. :)
Great answer. Tiny nitpick: the next version of TouchDB is called Couchbase Lite. We call it that b/c it is made by Couchbase and is lighter than TouchDB.
Couchbase for Mobile is what we call the umbrella (including our Sync Gateway which adds multi-user access control with more scale and flexibility than CouchDB, but compatible with the same sync protocol).
So from my little perspective: CouchDB does replication over the network; you are leveraging that to create apps that can queue network events locally and get on with other work; and one definition of a record can easily be used in JS, in the local store, and remotely.
Seems nice. I remember another thread recently with a discussion on replacing the standby icon. At some point someone drew up a series of dynamic icons showing network state - fully synced, uploading, fetching, etc.
Your project is addressing a new and previously unknown need for users to have a mental model of remote sync.
Always be working in growth areas :-)
My homework (along with everything else!):
1. How does CouchDB replicate and what are its failure points? (Nothing is magic, but you seem to like it - I will have to have a look.) I assume Erlang is not running on iOS - I will read up.
Thank you for taking the time to cudgel new knowledge into an old brain
CouchDB's replication is nifty because, in a sense, there is no replication. Or rather, not in the sense that MySQL or MongoDB might talk about replication as a separate process or separate protocol.
I spent a while typing up descriptions of how it works, but honestly I think you're best served by these two answers:
It's not magic, no. It does, however, work a lot like git (a good thing!). The biggest downside is that it pushes some of the work to the client; you really do have to figure out how to handle conflicts in some cases. For example, if you edit the same field on the same doc in different ways on two different clients and then try to sync, there is no automatic way to resolve that other than throwing one edit away (which you can totally do if that works for your problem space). Luckily, that's not a very common situation. :) And, again, how else are you going to solve something like that? It all comes back to the CAP theorem - you can have at most two of consistency, availability, and partition tolerance. Rich mobile apps can work around a lack of consistency if they must, but a lack of availability and partition tolerance undermines the "mobile" part; for some projects (like mine) that is unacceptable.
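For the curious, the deterministic part is sketchable. CouchDB picks a "winning" revision without any coordination: the leaf on the longer edit branch wins, and ties are broken by comparing the revision strings, so every node independently agrees. A simplified version:

```javascript
// Sketch of CouchDB's deterministic "winning revision" rule: among
// conflicting leaf revisions (strings like '2-abc'), the deeper edit
// branch wins; ties are broken by comparing the rev strings, with the
// higher ASCII value winning. Simplified from the real algorithm.
function winningRev(leafRevs) {
  return leafRevs.slice().sort(function (a, b) {
    var posA = parseInt(a.split('-')[0], 10);
    var posB = parseInt(b.split('-')[0], 10);
    if (posA !== posB) return posB - posA; // deeper branch wins
    return a < b ? 1 : -1;                 // then higher rev string wins
  })[0];
}
```

The losing revisions aren't silently deleted; they're kept around as conflicts for your app to resolve (or deliberately ignore).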
As for TouchDB, if memory serves it actually was running an Erlang VM at one point, but it has long since been rewritten in native Objective-C (for the iOS version) and Java (for the Android version). The PouchDB version is written purely in JavaScript. Again, all that matters in the end is that it speaks the protocol. As @caolan put it, CouchDB is not a database:
If my phone has no signal I'd rather see an error message. I don't want the app to pretend everything is fine when that's not the case. If I press "Delete" in the app, don't pretend the thing is deleted if it's not really deleted yet, that's deceptive. What if I take off on my boat and I press "Send" and your app says "Sent!" but it's really not sent because now I've left the coast--your app just lied to me.
If I'm writing an email in the Gmail app on Android, what I want when I hit send is for the app to save the email locally, and then do its best to try and deliver it. Ditto for, basically, every other app on the phone that talks to a remote server. If I try and send a tweet, I want it to go "sure", and then be able to go on about my business while it works in the background. (And, again, that's how the actual twitter app works.)
And if I'm using some app that lets me take notes about a meeting or interview and syncs them with the cloud, I sure as HELL want to be able to take those notes and save them locally, even if the cloud sync had to be delayed. "Sorry Mr. CEO, but I can't seem to get wifi in this meeting room. Can we go out on the balcony and you just run back over those last few points?"
Now, yeah, I need some way of checking to see if it has actually sent the email or whatever. And if it fails, a notification that it has done so is not amiss. Mobile apps, typically, already do this sort of thing. (For gmail for android, you can check the outbox to look for unsent emails, and I believe twitter for android will notify you if it fails to send a tweet.)
To the extent that you're just saying that the app needs to have a UI which doesn't lie or surprise the user, then I agree wholeheartedly. If the user cares whether or not the cloud has been updated, or the message has actually been sent, don't lie and claim it has when it hasn't!
But if you're saying that as a general rule we need to not handle network sync stuff in the background, I think that's nuts. Nobody wants to sit there with an email draft open hitting send every so often to see if the 3G signal is good enough to talk to the mail server yet. Nor do we want to save a bunch of emails as drafts when offline, then have to go open each one when we get net access again and send them. (Come to that, in your model, could we even save an email as a draft in gmail without net access? Remember, the drafts folder is synced. Do we want the app to claim that a draft has been saved when it's not accessible from your desktop gmail client? Isn't that another case of "pretending we've saved a draft when it's not really saved"? If not, why not?)
TL;DR: Good UI that doesn't lie to the user is important. That's not a good reason to cripple your app's functionality so it doesn't do anything you could lie about.
You think I'm wrong but you wholeheartedly agree? Funny. Every situation is unique but bottom line: you can't receive content without a connection. Most apps NEED a connection to function. So an error message "No Connection, Retry?" is both common and expected behavior.
Well the problem here is not what the app did - it's what it said. If offline it should say "Offline - queued for deletion" or "Offline - queued for sending". When you have offline mode you have to think differently and design your UX appropriately. You can't just pretend you're online and play fast and loose with how you inform the user of state.
Sorry but your example is a red herring.
Hope this clarifies.
Dropbox and iCloud have the same issue. They seem to be accepted by users. It is also possible to get status information telling you if you are fully synced or not.
Another way to think about this (from the old days) is that disconnected-mode data is a given, because the rate of growth of data is far greater than the rate of growth of bandwidth.
So everyone can't always be accessing all their data in the cloud through an always on infinite bandwidth connection.
So you're going to need to carry a subset of your data with you - the subset that's "hot".
Now having said that keeping hot data in sync with the larger data store is a hard problem - p2p or not.
And what these architectures (CouchDB, PouchDB, ...) do is take the sync problem and make it an infrastructure issue so it only has to be solved once by the infrastructure creator (CouchDB team) rather than again and again and again by each app developer who wakes up in the middle of the night and realizes they have to solve "sync" in their app as an application problem. Then they have a nightmare and when they wake up they are babbling.
The additional anti-pattern-badness with sync-in-the-app is that that kind of sync is usually incompatible with another app's bespoke sync. But when sync is in the infrastructure then a much larger group of people who use that infrastructure can share data across apps if they want to without having necessarily to share a schema.
This is the real power of making sync part of the underlying computing fabric.
Finally, when this infrastructure is open source, the real danger of lock-in - such as when you use Dropbox - is mitigated, should you want to have that freedom.
Peer-to-peer sync? So this is not client-server, but browser to browser? WebRTC-enabled? Much cooler. And not even eventually consistent. Please point me at theories :-)
> jQuery / backbone dev can be productive with
I am not clear on that - is that "someone who does not / cannot develop on a traditional server backend (LAMP etc)"? How does Hoodie help them? How do you handle network downtime, or two clients changing one backend resource with downtime?
Why? - that was more what are the benefits to me as a LAMP / backbone dev to using Hoodie, as opposed to are you enjoying yourself, which seems evident :-)
This is many-client-server. P2p sync enables you to open your Hoodie app on your browser, smartphone and tablet at the same time and have it all work, even when some or all of the devices are offline for short or long periods of time :)
> I am not clear on that - is that "someone who does not / cannot develop on a tradiational server backend (LAMP etc). How does Hoodie help them
$ hoodie new app
$ cd app
$ jitsu create
$ jitsu push
(simplified)
> How do you handle network downtime?
Waiting.
> or two clients changing one backend resource with downtime?
p2p sync with conflict detection as provided by CouchDB.
> that was more what are the benefits to me as a LAMP / backbone dev to using Hoodie, as opposed to are you enjoying yourself, which seems evident :-)
While you are capable of doing all the heavy lifting, why not focus on building great apps? Or if your thing is running backends for people who build the apps, why not give them Hoodie and take the day off? :)
That isn't what "peer to peer" means, though. Peer to peer means that peers (the aforementioned browser, smartphone and tablet) communicate directly with each other.
Also, how does CouchDB keep things secure if clients can sync apparently any data? I'm assuming there's something there, but nobody else has explained this.
For CouchDB, replication is just another client connecting to a DB; it's handled the same way anything else is.
As a result, we have the notion of filtered replication. The server-side CouchDB won't tell anyone secrets they don't need to know - be they clients or client-side DBs. At the other end, the client-side CouchDB has some validation to stop "bad" data going into the database by accident, and then the server-side CouchDB has the same validation again. People can compromise their client DB all they want, but eventually it all ends up on the wire as plain old HTTP requests interacting with a plain old REST API.
In many ways - as far as security goes - the client DB is a red herring. The server ONLY speaks a well-defined REST API, and has validation and security to deal with malformed or malicious API requests. The fact that those requests are generated by a client-side DB based on data entered into it via JS commands is neither here nor there; a Backbone app would generate the exact same requests based on more-or-less the same JS commands. If you can secure any REST API, you can secure CouchDB.
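To make the validation part concrete: CouchDB lets you put a validate_doc_update function in a design doc, and it runs server-side on every write - whether the write comes from a browser, a worker, or replication from a compromised client DB. A minimal sketch; the 'type' and 'owner' fields are made-up examples, not a fixed schema:

```javascript
// A minimal CouchDB-style validate_doc_update function, as you would
// put in a design doc. Throwing {forbidden: ...} or {unauthorized: ...}
// rejects the write. The 'type' and 'owner' fields are illustrative
// assumptions only.
function validate(newDoc, oldDoc, userCtx) {
  if (!newDoc._deleted && !newDoc.type) {
    throw({ forbidden: 'every doc needs a type' });
  }
  if (oldDoc && oldDoc.owner !== userCtx.name &&
      userCtx.roles.indexOf('_admin') === -1) {
    throw({ unauthorized: 'only the owner or an admin may edit this doc' });
  }
}
```

Because this runs on the server, it holds no matter what a client does to its local database.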
Key difference: kan.so is Couch specific. Hoodie is not. We do not try to make another nice library to wrap CouchDB. We try to make a nice JavaScript API for the most common backend tasks.
Forgive my ignorance, but CouchDB does not run in the browser, right? So am I correct that the "mostly in the browser" statement doesn't apply to that particular data store?
Looks lovely, but I wish people would focus less on fast development and more on something that betters maintainability 2 years down the road.
When it comes to the lifecycle of most applications, the speed of developing something new has a marginal impact on the overall costs, and most tools and frameworks that enable rapid initial development tend to suck once you have a big app on top of them. This is why we fawn over a tool when it's shiny and new, and bitch about how much it sucks 3 to 5 years later.
> [dev speed has] a large impact on the overall probability of getting off the ground
I don't know. This seems common sense but it could be a cliché too.
If we look at other domains, it is not always true. For instance, if you write novels that are not tied to some evanescent trend like a pop star's private life, then the chance of getting off the ground is not directly linked to the timing of publication. It would be the same if you were creating a newly designed chair.
Even on the web, let's take two success stories: when Wikipedia or Twitter were created and made public, I don't remember seeing five other contenders running behind them, only two feature-months behind. Same with, say, Minecraft.
I am all for quick development, because it is more fun, but not because it would magically increase the probability of success, which is too low to compute and unpredictable anyway.
I am also all for choosing carefully which path you take, especially when you cross virgin territories. You don't want to find yourself spending a night half naked with a toothpick for your sole defense in Borneo's jungle.
Let's say today 1 of 1000 people is able to create an app to solve a problem - say, time tracking. Now imagine we change that drastically, to 100 of 1000 or even more. People will start to build their own little apps to solve their own little problems. And if these break, nobody cares but them.
For example, friends of mine like to go climbing. They built an app in an afternoon, just for their group, to keep track of what tracks they finished and how they liked them. I think this is fascinating. I wonder what would happen if students would learn to build simple apps like that in school, instead of Excel?
"Now imagine we change that drastically, to 100 of 1000 or even more. People will start to build their own little apps to solve their own little problems. And if these break, nobody cares, but them."
Both Danny Goodman and Bill Atkinson have stated publicly that they got programming questions from every kind of person: small business owners, school teachers, ski bums, taxi drivers, etc.
Hypercard let non-programmers build simple apps, and so a lot of apps were created that would never have been created if app-creation was left in the hands of professional programmers. And many of those little apps were tailored to what the creator needed at that moment, in a way that could never happen if a professional programmer drew up some spec.
It's ironic that the tool that has replaced Hypercard is in fact Excel. Excel makes it amazingly easy for non-programmers to develop simple applications.
> They've build an app in an afternoon, just for their group, to keep track of what tracks they finished and how they liked it. I think this is fascinating. I wonder what would happen if students would learn to build simple apps like that in school, instead of Excel
To be pedantic, Excel would be the perfect tool for writing an app like this. A few columns, a quick and dirty VBA GUI... Bob's your uncle.
Although I agree with sentiment of your post, actually the publication date of a novel and the release date of new furniture lines are very important [the same with music, movies and just about anything where we have enough data to do trend analysis].
I'm not sure about the novel and chair examples, but the issue of evidence of every early stage failures in general is interesting.
The other proto-pedias and Minecraft-could-have-beens never happened at all - for some people, they stayed on the drawing board and never made it to the web - so there's no evidence of them to point to at all. They left no trace.
You are very right in saying this, but I would have to contend that there is a large scope for quick-to-market apps and small, simple-purpose apps.
In addition to that, it seems that many apps created today themselves have a limited lifespan, after which they are replaced by something else (though you can never tell upfront).
The other side of the coin is building something with the intent of maintaining it for many years, only to find it is obsolete within 2. (Meaning that the "right tool for the job" cannot always be determined correctly in software development).
> I wish people would focus less on fast development and more on something that betters maintainability 2 years down the road.
Hoodie focuses on the data model, not the application logic or UI. If there's one thing I want to be consistent and maintainable and to have a really solid support stack under it - if there's one area I value maintainability and consistency - it's the data layer, which this very much attempts to provide a consistent development experience for.
An app built with Hoodie I could see as being maintainable - even as you gut the entire stack and drag your Rails app to Flask, or your PHP app to Java, Hoodie has defined only the kinds of objects and patterns of accessing them that you'll reuse in whatever your actual app stack is. Hoodie targets the data, which is the thing. Everything else? Who cares. Here's to maintainable data.
I haven't had time to look at or try out the walkthrough, so here's a kind of dumb question that wasn't answered explicitly in the docs: how does it work with existing front-end frameworks, such as Backbone, Angular, Ember, etc.?
hoodie is a JavaScript library that runs in your browser.
It gives you:
- user authentication
- data storage and sync
- sharing
- emails
- and so much more
None of these things are really handled by the client-side frameworks mentioned above, so I'm wondering if Hoodie is basically drop-in easy for existing projects.
First of all, Hoodie is just a store for data. So instead of Backbone, Angular, etc. storing their data via AJAX on a server, or in localStorage, we can very simply build adapters for all of the MV* frameworks to use Hoodie as their store.
This way, you get accounts and data synchronization for free. And offline ;-)
Emails and other future modules like payments are just extensions of the available JavaScript API in the browser.
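As a sketch of what such an adapter might look like: the store API below (add/find/update/remove keyed by a type) mirrors the rough shape of hoodie.store, but treat the exact signatures as an assumption:

```javascript
// Map Backbone-style sync methods onto a Hoodie-like store.
// `store` is any object exposing add/find/update/remove; the real
// hoodie.store returns promises, so callers would chain .then().
function hoodieSync(store, method, type, attrs) {
  switch (method) {
    case 'create': return store.add(type, attrs);
    case 'read':   return store.find(type, attrs.id);
    case 'update': return store.update(type, attrs.id, attrs);
    case 'delete': return store.remove(type, attrs.id);
    default: throw new Error('unknown sync method: ' + method);
  }
}
```

The point is how thin the adapter is: the framework keeps its own model layer, and persistence, accounts, and sync all ride on the store underneath.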
I have not read any more than the intro linked, but I am kind of "concerned" by the big claims:
> Behind the scenes, Hoodie takes care of account creation, email validation and all the boring backend tasks you have to think of.
Yeah, but how? You still need to do account creation, even if you are storing the hoodie-account-object locally in some fashion.
Am I being dense, or is this like a nice client framework that still needs to talk AJAX to the backend, so still needs to allow OAuth, so still needs to change code if the API changes, and so on?
"We just want to make apps, and add billing by Monday" is a bit worrying as a goal to me...
edit: oh, so this stubs out the server side at the client?
It seems like the same amount of work to keep Backbone models in sync with the server as it does to keep Hoodie's in sync. What do I gain?
Hoodie comes with a backend that does all that stuff that can only be sensibly done server-side for you.
> edit: oh, so this stubs out the server side at the client? It seems like the same amount of work to keep Backbone models in sync with the server as it does to keep Hoodie's in sync. What do I gain?
I'm looking for someone to translate this into an Angular adapter and combine it with Yeoman 1.0b4's AngularJS generator, among other things, to test out Hoodie. If you were already going to do it, might as well do it sooner than later and get paid for it! :-)
I posted the job on Elance, Freelancer, and Odesk if anyone is interested.
I have a question for experts in Hoodie and/or other similar projects (like Meteor):
How are database connections managed? If every client (browser) basically talks nearly directly to the database, don't you end up with thousands of database connections?
A large number of connections is not a problem by itself, and servers like nginx handle them well, but I am not aware of a database which will feel comfortable being exposed this way. MongoDB, for example, really starts struggling with 2K+ connections. PostgreSQL can barely handle a hundred.
Is this why CouchDB is used? Is connection pooling used on the server? Or maybe the connections are short-lived and never persistent? Basically, how does this work with a large number (say, 10K) of concurrent visitors?
As long as the underlying OS can provide file descriptors, CouchDB can handle the concurrent connections through the magic of Erlang, both persistent and shorter lived.
There are obvious limits to this, but the Hoodie architecture allows easy scale out (more DB servers and manual sharding, or a dynamo-like BigCouch, more workers etc.) that we’ll get to making use of once Hoodie apps become that big.
Never attribute to hipsterism what could adequately be explained with picking priorities :)
The hoodie backend works anywhere node and Couch work (anywhere, really), it’s just that the local dev setup with the fancy domains and everything is tailored to Mac OS X. We have Linux support in the works.
I'd say OSX is pretty mainstream for web development in my experience. If Hoodie was only available for the Atari ST then perhaps it would be a hipster framework.
Are you on Windows or Linux? It would be great if you could help us migrate it.
Hoodie is basically CouchDB + node.js, nothing fancy. We just do local DNS magic, creating *.dev domains like http://pow.cx/ does; that part is only compatible with Mac atm.
I'm on windows 8 primarily, but I also run osx and mint. I'm going to check this out on osx first, if all is good, I can help ensure windows compatibility.
What do you think the probability is that someone considering using this framework is also running OS X? I'd say it's pretty high. BTW, wouldn't you say associating OS X with hipsters is a bit outdated?
Nice presentation. In the description of back end modules, you mention payments as a future capability. What kind of payment service would you be able to work with? (I don't see how you could handle card details in the Hoodie architecture). Thanks.
We definitely won't send any credit card information to a Hoodie server. Instead we use a service like Stripe, but then receive the payment notifications or errors, which lets us notify the user whether payment worked or not.
How do you handle permissions, by the way? Is it the same concept as Firebase's? How do you handle data validation? (E.g. preventing users from gaming the data - say, saving all their stolen ebooks in your system by using a console, or just cheating in a game by making illegal moves?)
Permissions are handled directly by CouchDB. Every user has their own database that only they have access to. Synchronization between accounts can be done via shares.
We don't have validation at the moment. You can store whatever you like, it's just JSON. We could add validation at some point, but it's currently not on our agenda.
We try to keep it really simple at the moment, getting the core right, then empowering others to build modules on top of it.
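On the per-user database point: a database per user means you need a legal database name for any username. One common convention (CouchDB later shipped a variant of this as the couch_peruser module; I'm not claiming Hoodie uses this exact scheme) is to hex-encode the name:

```javascript
// Derive a per-user database name by hex-encoding the username, so
// arbitrary usernames map to valid CouchDB database names. The
// 'userdb-' prefix follows CouchDB's couch_peruser convention; whether
// Hoodie uses this exact scheme is an assumption.
function perUserDbName(username) {
  return 'userdb-' + Buffer.from(username, 'utf8').toString('hex');
}
```

Hex encoding sidesteps CouchDB's restrictions on database-name characters, at the cost of readability.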
I’m a Unix dev too, check out my beard in the linked video :D
As mentioned elsewhere in this thread, support for other *nixen is in the works. Hoodie core is platform-agnostic; we just chose Mac as the dev environment because we figured that was what most people use. We are definitely not exclusive and would love any help towards getting it running on other systems :)
So sorry that we didn't get to it yet. Hoodie is certainly not OS X exclusive; we just have some DNS magic that helps with developing locally, and that's Mac-only at the moment.
Hoodie is just Node.js & CouchDB, so you should be able to run it without too much trouble. Try to follow the instructions here: https://github.com/hoodiehq/hoodie-app/issues/35 We're happy to help!
This looks pretty optimal for a little app I'm building. I've been bolting bits to Sinatra and using it as a half-API-half-template engine for a frontend I built, this looks far cleaner. I'm curious about security though, what's stopping a user from plugging in random data via the console?
1. Even if the users are idiots you shouldn't let them ruin their own application experience.
2. I would never put anything remotely connected to security and user privileges in the same storage accessible by users, so I would have to set up a separate service.
The distinction here is that Hoodie is supposed to free you from dealing with servers, but that is currently limited to scenarios where you have users with uniform access privileges and no concerns about users messing around with their database information. So until they add modules most projects will have to get down and dirty in the end if they want to attach any kind of privileges to users. In both points 1 and 2 you need to have some server side logic beyond Hoodie.
Hoodie can only promise to free you from worrying about the backend by providing one that you can just use.
The sharing module, for example, makes heavy use of server-side logic and of the database's security and access control features. The Hoodie frontend just makes them accessible to frontend devs.
hood.ie has a different approach. Meteor brings backend logic to the frontend. hood.ie tries to hide the backend entirely and provide an API that feels natural for the frontend environment.
The JavaScript API is what we care about most. We do currently have the hood.ie backend implemented in Node/CouchDB, but it could be everything, really. The frontend developer doesn't care.
Berlin and Zurich seem to be doing great lately. I'm also planning on building a startup in Zurich in the coming months. How about a Hacker News Zurich meetup?
ZH citizen and heavy HN user here. We've got lots of talks in our company and a co-working space to go along. Would be awesome to host a HN meetup here. Ping me if you're interested in doing something together.
It's awesome when you read posts on HN about interesting things and realize your friends were involved. Congrats gr2m!
(P.S... +1 for an Angular adapter =))
I think promising "we want to enable you to build complete web apps in days" might be promising too much. Programming is difficult, there is no silver bullet, and a different data store certainly won't change that.
Interesting choice of couchdb. Any reasons for that compared to the alternatives?
If you want to have a local DB in the browser, and a remote DB in the cloud, your choices are basically CouchDB or a ton of pain reinventing the wheel. Replication and syncing are hard.
In other words, for the very specific problem they're trying to solve, I don't think there ARE a lot of alternatives except "abandon local storage, and work directly against a DB in the cloud".
I agree that there is no silver bullet for apps, and Hoodie does not try to be one. Our main goal is to enable as many people as possible to build real, data driven apps.
Think of it this way: if you're able today to build the frontend of an app in days, then you have all it takes. Our maxim is: if you get jQuery, you should be able to build apps with user accounts, data synchronization, emails ... the basic, boring stuff.
hoodie strikes me as more of a local optimum rather than a global one. In other words, it's not a universal silver bullet and wouldn't be suitable for all tasks. But for the tasks for which it is suited, perhaps it can eliminate a lot of boilerplate.
[Disclaimer: I've spent a total of 5-10 minutes reading about hoodie.]
We designed Hoodie around very few, but very specific, use cases, to get the core right. The more use cases we add, the broader it will get, but we have no intention of pleasing everyone.
We have our own little lightweight version of PouchDB that isn’t really a version of PouchDB. It works well for now, but we might migrate to Pouch later.
Yeah, we should definitely work on this when you are in town :) The storage part of Pouch is nearing completion and stability; the server infrastructure was next on the list (after docs etc.).
We currently use localStorage. But we <3 PouchDB, we just didn't get to it yet. It shouldn't be hard to change that, or to build your own store wrapper providing the same API.
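The "build your own store wrapper" idea can be sketched in a few lines. This is a hypothetical wrapper over any localStorage-compatible backend; the `add`/`findAll` method names loosely mirror hoodie.store but are assumptions, not Hoodie's documented API:

```javascript
// Hypothetical store wrapper over a localStorage-compatible object.
// Method names (add, findAll) are illustrative, not Hoodie's real API.
function createStore(storage) {
  var seq = 0;
  return {
    add: function (type, attrs) {
      var id = type + '/' + (++seq);
      var doc = Object.assign({}, attrs, { id: id, type: type });
      storage.setItem(id, JSON.stringify(doc));
      return doc;
    },
    findAll: function (type) {
      var docs = [];
      for (var i = 0; i < storage.length; i++) {
        var key = storage.key(i);
        if (key.indexOf(type + '/') === 0) {
          docs.push(JSON.parse(storage.getItem(key)));
        }
      }
      return docs;
    }
  };
}

// Node has no window.localStorage, so use a tiny in-memory shim for demo:
function memoryStorage() {
  var data = {};
  return {
    setItem: function (k, v) { data[k] = v; },
    getItem: function (k) { return data[k]; },
    key: function (i) { return Object.keys(data)[i]; },
    get length() { return Object.keys(data).length; }
  };
}

var store = createStore(memoryStorage());
store.add('todo', { title: 'try PouchDB' });
store.add('todo', { title: 'write docs' });
console.log(store.findAll('todo').length); // 2
```

Because the wrapper only needs `setItem`/`getItem`/`key`/`length`, swapping localStorage for a PouchDB-backed adapter later would mean changing the backend, not the app code.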
I know everyone hates that question... (I hate it too), but what about non-JS users or, maybe more importantly, SEO and crawlers? Honestly, this is what is holding me back from going pure client-side for most of my projects. I know there's the PhantomJS server-side hack, but how does that work in practice?
Also, something else that really troubles me with client-side JavaScript is that, often, when something breaks, everything just stops working. I.e. links don't work anymore, buttons obviously don't... Is there a better solution than a hard refresh? Would a top-level try/catch solve this problem?
> but what about non-js users or, maybe more importantly, SEO or crawlers
Hoodie is more of a tool for applications, with user authentication etc, so SEO is not relevant here. I wouldn't build a public website with Hoodie.
> Also, something else that really troubles me with client-side JavaScript is that, often, when something breaks, everything just stops working
Yes. That's something we have to handle if we want dynamic web apps built on web technologies. But there are great tools today that help you 1. prevent JavaScript errors through automated testing across browsers and OSes, and 2. track JavaScript errors in production, e.g. with Errorception. And 3. today's browsers are more relaxed about errors; they try to keep the app running even if one function errored out.
> Is there a better solution than a hard refresh? Would a top-level try/catch solve this problem
If you can keep the app's state and store the user's data immediately, reloading the page is no big problem. I do that on several occasions at minutes.io; the user usually doesn't notice, it's very fast.
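The save-state-then-reload trick described above can be sketched as follows. The key name `appState` and the state shape are assumptions for illustration:

```javascript
// Sketch: persist view state before a hard refresh, restore it on startup,
// so the user barely notices the reload. Names here are assumptions.
function saveState(storage, state) {
  storage.setItem('appState', JSON.stringify(state));
}

function restoreState(storage) {
  var raw = storage.getItem('appState');
  return raw ? JSON.parse(raw) : null;
}

// In-memory stand-in for window.localStorage so this runs anywhere:
var shim = (function () {
  var data = {};
  return {
    setItem: function (k, v) { data[k] = v; },
    getItem: function (k) { return k in data ? data[k] : null; }
  };
})();

saveState(shim, { route: '/notes/42', draft: 'unsent text' });
// ...page reloads here...
console.log(restoreState(shim).route); // "/notes/42"
```

In a real app, `saveState` would run on every change (or in a `beforeunload` handler) against the browser's actual localStorage, and the startup code would re-render directly from the restored state.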
I know that as tech people we're keen on using the latest technologies and we like to think users are like us, but sadly, this is not the case. It really depends on your market.
"the web is no longer just text"
Although I can't deny that statement, I find it hard to accept that a browser can't read your page just because no JS engine is running. I see it a little bit like email: yes, more recent clients give you a better interface and better features, but if I'm emailing someone who has a very old phone with very basic email support, I hope that person can still read my message.
I'd love to see what would happen if HN turned JS-enabled only for a day.
I run noscript. If I encounter a site that's broken or useless for seemingly no reason (e.g. not an interactive thing like Google Maps), I often don't even whitelist; I just leave. Relatedly, I've stopped clicking on Photobucket or Blogspot links.
I kinda resent being expected to run your pile of arbitrary code just to render static text and images on my screen—something that worked just fine without JS twenty years ago.
While I admire your principled stand - you are in a dwindling minority so there isn't much reason to factor in people such as yourself when making technical decisions.
The SEO issue is a stronger argument but Googlebot now seems to be executing javascript in some cases so even that might cease to be an issue.
It's not just people like myself; it's people writing one-off scrapers, people writing new search engines or browsers (Google is not the entire universe), everyone when you forget a brace and break all of your JS, etc. The Web is not and has never been merely human beings sitting at a keyboard and using one of three known GUI browsers.
That's your choice, but don't expect devs to change their whole stack and development approach to accommodate your refusal to use a technology that has been around for 15 years.
while (true) {
  hoodie.account.signUp(_.uniqueId() + '@gmail.com', _.uniqueId())
}
Great effort though, but I still don't think you've got client/server separation right.
We built something very similar over a year ago: an offline client DB backed by memory/WebSQL/IndexedDB that replicates with DynamoDB so the app works offline. We looked at the Couch/Pouch solution but decided against Couch for performance reasons.
Might open source it later once we figure out the user management/security bit. I still find the existing solutions (including Firebase) not ideal.
You have the option of requiring user confirmation via email (right now users are auto-confirmed by default). Non-confirmed users only write to their localStorage (so they can still use the app pre signup), but nothing (apart from their tiny user object) is synced to the remote DB, nothing is parsed by workers, etc. Then write a worker to cull unconfirmed user objects after a while, and you should be fine. I don't know about additional throttling though, but it seems very doable.
Disclaimer: this isn't my area of expertise, but the rest of the Hoodie team are quite confident about this issue, and I trust them. The other two are currently travelling, but if you have more questions, I'm sure one of them will get back to you with more in-depth info.
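The "cull unconfirmed user objects after a while" worker mentioned above could look something like this. The field names (`confirmed`, `createdAt`) are assumptions, not Hoodie's actual user schema:

```javascript
// Hypothetical cull logic for a backend worker: keep confirmed users,
// drop unconfirmed user docs older than a cutoff. Field names assumed.
function cullUnconfirmed(users, nowMs, maxAgeMs) {
  return users.filter(function (u) {
    return u.confirmed || (nowMs - u.createdAt) < maxAgeMs;
  });
}

var DAY = 24 * 60 * 60 * 1000;
var now = Date.now();
var users = [
  { name: 'alice', confirmed: true,  createdAt: now - 30 * DAY },
  { name: 'spam1', confirmed: false, createdAt: now - 10 * DAY },
  { name: 'new1',  confirmed: false, createdAt: now - 1 * DAY }
];

var kept = cullUnconfirmed(users, now, 7 * DAY);
console.log(kept.map(function (u) { return u.name; })); // [ 'alice', 'new1' ]
```

A real worker would run this periodically against the `_users` database and delete the filtered-out docs, rather than operating on an in-memory array.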
I was thinking there might be some sort of throttling/banning mechanism within the signUp function itself, but being client-based, that could be easily circumvented unless there's some server-side logic as well. I wonder whether there's a sane way to handle this client-side, short of minifying and/or obfuscating the code and hoping for the best.
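There isn't really a sane client-side answer: anything shipped to the browser can be bypassed, so the throttle has to sit server-side, in front of the signup endpoint. A minimal sliding-window limiter (keyed by IP, say) could look like this; it's a sketch under those assumptions, not anything Hoodie ships:

```javascript
// Sketch of server-side signup throttling. Client-side checks can always
// be bypassed, so the limit is enforced per key (e.g. IP) on the server.
function createRateLimiter(maxRequests, windowMs) {
  var hits = {}; // key -> array of request timestamps
  return function allow(key, nowMs) {
    var recent = (hits[key] || []).filter(function (t) {
      return nowMs - t < windowMs; // keep only hits inside the window
    });
    if (recent.length >= maxRequests) {
      hits[key] = recent;
      return false; // over the limit: reject this signup attempt
    }
    recent.push(nowMs);
    hits[key] = recent;
    return true;
  };
}

var allowSignup = createRateLimiter(3, 60 * 1000); // 3 signups per minute
console.log(allowSignup('1.2.3.4', 0));     // true
console.log(allowSignup('1.2.3.4', 1));     // true
console.log(allowSignup('1.2.3.4', 2));     // true
console.log(allowSignup('1.2.3.4', 3));     // false (4th within window)
console.log(allowSignup('1.2.3.4', 61000)); // true (window has passed)
```

In production this state would live in something shared (CouchDB itself, or Redis), and you'd likely add email confirmation on top, as described above.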
Apologies for an off-topic question - how did you go about registering a .ie domain name? I've been wanting to register one for a while, but the last time I looked there was a requirement to show proof of residency (or a business address) in Ireland.
Edit: The reason I ask is that it seems like the developers are living in Zurich and Berlin, not Ireland. I live in SF, so can't fulfil the residency requirement.
No problem :) There are actually more than three of us working on Hoodie, one (actually two by now) of them an Irish national. The about section just lists the core team, if you will.
That makes sense. I guess I will just have to go without until they (eventually?) open up registration to foreigners! Thanks for taking the time to reply.
I'm very interested in using this together with emberjs, but honestly I'm having trouble understanding what Hoodie actually does. Why do I need Hoodie to go with my CouchDB? Is it only for the local storage?
In the current implementation, CouchDB takes care of it. In case of a conflict, both versions are kept, with one winner. The other version can be recovered and the conflict resolved, both automatically with a worker process or by the user.
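CouchDB's winner-picking is deterministic so that every replica agrees without coordination: among conflicting leaf revisions, the one with the longer edit history wins, with ties broken by comparing revision strings. A simplified sketch of that rule (revision format assumed as "generation-hash"):

```javascript
// Simplified sketch of CouchDB-style deterministic conflict resolution.
// Among conflicting leaf revisions, higher generation wins; ties are
// broken by comparing the revision strings, so all replicas pick the
// same winner independently. Real CouchDB rev handling is more involved.
function pickWinner(revs) {
  // revs: array of "N-hash" revision strings, e.g. "3-a1b2c3"
  return revs.slice().sort(function (a, b) {
    var na = parseInt(a, 10), nb = parseInt(b, 10);
    if (na !== nb) return nb - na;       // higher generation first
    return a < b ? 1 : a > b ? -1 : 0;   // then lexicographically higher
  })[0];
}

console.log(pickWinner(['2-aaa', '3-bbb'])); // "3-bbb"
console.log(pickWinner(['3-aaa', '3-zzz'])); // "3-zzz"
```

The losing revision isn't discarded; it stays in the database as a conflict, which is what lets a worker or the user resolve it later, as described above.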