My second reaction: what happens when the magic ends? When I was new to Rails, I really loved how easy it was to get started with scaffolding, a nice DSL for specifying relations, nifty form helpers. However, the first time I veered a little off the golden path and wanted to do something a little more complicated (that was not supported by the form helpers at that time), I ran into a huge wall. I found out that I actually had no idea how Rails worked. The magic had hidden all complexity from me before, and now the magic failed me and I didn't know where to start doing it on my own. Rails has matured a lot since then (it also dropped some of the magic in favor of more explicit behavior) and my understanding of the framework has grown with it.
Meteor looks even more magical to me. There is so much stuff happening. What if I plan to do a lot of dynamic updates and want to defer live-updating while I do mass updates? What if I need a hard division between client and server code? What if I want to share the backend with an iOS app or something? Can I use this to build an API? The documentation does not yet fully answer these questions for me.
I now need to buy in to the entire framework mindset to make progress, which slows things down (because it doesn't necessarily match my own way of doing things).
Had the same experience with CakePHP. All those freebies eventually caught up with me.
Ties in well with the whole libraries vs frameworks debate. I'm now in the libraries camp.
I wonder if Meteor packages are decoupled from each other so that I can pick and choose which ones I'd like to use in my project.
Turns out there really is no such thing as a free lunch.
That's interesting, that idea could be folded into the technical debt metaphor.
So, by taking on a library or framework, you get a head start by just using it, but take on the knowledge debt of not really knowing how it works. And when you get to the edge of what it can do, you either pay off the knowledge debt by learning how it works, or you throw the whole thing out and use a different library.
An ORM also seems to lower the amount of configuration it takes to get development databases synced. It's not as much of an issue for an experienced dev, but a designer or new team member would need help.
I have had to learn AREL, the relational algebra used by ActiveRecord, in order to do more advanced queries. That's analogous to learning SQL in more detail, but I'd still take that in a heartbeat over writing raw SQL. The ORM automates things like tersely expressing the object associations I've built, leaving room for fewer syntax mistakes.
> The ORM automates things like tersely expressing the object associations I've built, leaving room for fewer syntax mistakes.
Maybe it depends on the system you're working on, but mitigating syntax errors seems like a small benefit. For me, the SQL for most projects is fairly static: once a given set of queries has been defined and tested, they can lie there untouched, so once I nail a query and it performs the way I like, I hardly ever need to go back and touch it again. However, the performance penalty of sitting behind an ORM is ever present, for each query (at least on a cache miss). Personally, I just don't like having hundreds of lines of code sitting between:
$model->get(array('model.field' => 'value'));
and actually receiving the data.
It just seems so... unnecessary.
Of course YMMV, and perhaps the benefits kick in when you're working on a team (I'm not).
In the earlier days, without powerful frameworks, I was really developing: figuring out which pattern or algorithm did the best job. Nowadays, working with frameworks, I feel dumber and dumber: instead of thinking for myself, I just google API calls and put them together. I'm not trying anymore, because I don't know where to start and everything is already solved on GitHub. This magic sometimes feels so meta and boring, because I'm mostly consuming specs, tutorials, and screencasts, just gluing things together instead of really programming.
Just diving into JS and Node.js to get this "go-kart" feeling back. But I guess the more magic involved, the faster the development, and Meteor admittedly looks revolutionary.
Then came factories. Factories were super-effective, they produced a lot of things fast, and cheap too, and the level of skill required from a factory worker was in no way comparable to the skill of a proper craftsman.
I notice the same tendency in software development today: we, the software developers, are more like factory workers mindlessly sticking parts together than "ninjas" or "rockstars". Of course, the process of software development today is not entirely like a factory, but I sense it's moving in that direction. And if it is, programmers will become cheap and fungible. Imagine hiring illegal immigrants to write an iPhone app :)
- How do you guarantee that all those pieces you just glued together will work with high availability? That they'll feel easy to use and consistent to the end user, and easy to maintain (for future developers) and operate? Quite a lot of work is often needed to get all that.
- User expectations are getting higher. You could get away with a level of quality and usability in 2005 that you couldn't today. Had desktop software in 2000? In 2006 it was desktop software and a website; now it's a web app, an iPhone app, an Android app...
- What you are describing is putting together a solution. That might make sense in many end-user scenarios, web apps, etc. For things like automotive, low-level programming is still needed, though abstraction is slowly making its way in there too.
I love gluing things together that do the nitty-gritty stuff for me because it lets me focus on the big picture of making things happen, which is what I love about programming.
Oh FFS, this is not revolutionary. It is an attempt to wrap things in a friendlier package, while at the same time making something horribly insecure as a default install. Ruby on Rails did that a long time ago.
Better examples of revolutionary:
Or even in programming:
Try not to throw words like "revolutionary" around.
Seems one of Node's primary advantages (its async, non-blocking style) has been eschewed.
For a user of a product/service, this is bliss. For a hacker, it might get tricky.
Now, considering the fact that this is all open source, the huge wall you’re talking about is just a matter of perception. You’re free to debug and go with it as low level as you need.
It’s also worth observing that Meteor is not presented as a “framework”. The comparison with Rails was introduced here, on HN. They say it’s “a set of new technologies”. I guess this implies loose coupling, which is a healthy paradigm when applied to components/modules/objects of a development ecosystem.
If only Meteor were nothing more than a proof of concept, it would still be mind-blowing.
We've got a lot more stuff coming over the next few months, and if there are particular things you'd like us to do/prioritize, I'd love to hear about them!
If I were to use Meteor for a partially closed source app earning around $1000 per month, how much could I expect to pay you? From the FAQ:
> If the GPL doesn't work for your project, get in touch (firstname.lastname@example.org) and we will write you a commercial license to your specifications.
This is no way to answer such an important question that most potential users will have - let's face it, there's a lot of people who will want to use this for open source, but a lot more who will want to use this for commercial apps. To ensure Meteor's wide-spread acceptance, you need to clearly answer this question on FAQ page, and not with "we need to have a conversation".
Their preference is that the library be used under the GPL. That is their choice and it is a fair enough choice. But they also acknowledge that some entities can't accept that for one reason or another, and are willing to be flexible. Very flexible: instead of saying "this is our commercial license, deal with it" they have said "tell us what you need, and we'll see what we can do".
(To be clear, it's not that I begrudge the authors' right to license it how they want. I simply think frameworks like this are too central and important to have any uncertainty about licensing associated with them.)
You guys have done a great job at making a true reactive system that doesn't burden the developer with complex binding/event systems. I also like how you guys have avoided the persistent store impedance mismatch :-)
overall, i'm excited to try this out. great work.
Anyway, thanks much for the support :)
We got pretty comfortable with the conveniences the Rails environment provides, so it's understandable some people would rather just add another layer (Backbone/Spine) and avoid the paradigm shift.
The advantages of attacking new problems with a new toolkit can sometimes appear less beneficial, because swapping out the current mega-stack for something slimmer inevitably leads to a reduction in features (e.g., you end up making another `view` in place of a collection partial; mailers, gems/middlewares, etc.).
That said, despite getting pretty comfortable with Rails/Backbone|Spine, I'm in the pragmatic camp, looking forward to tackling a few side-projects with Meteor.
I haven't felt that way about Node.js or even its higher-level frameworks like Express or Batman. This feels like "The One", even though I've been absorbing the docs and screencasts only for the last 20 minutes.
Here's my concern: if I use Meteor as it is intended, and I also want my application to have an API, I'll have to re-implement all of my logic server-side. This seems like a step back, unless I'm misreading things.
I'd love for the Meteor team to address this. Does Meteor make writing APIs easier?
I'm aware that much of the "client-side" code is executed server-side too, but that's due to Meteor's magically transparent client-side framework.
We couldn't go into everything in the video (it was already too long!), but the piece you're missing is Meteor.methods(). This lets you define methods, which are functions that the client can invoke on the server by calling Meteor.call(). If you choose to also ship the code for these methods to the client (it's optional) then you'll get full latency compensation like we talk about in the video. (This means that Meteor can latency-compensate arbitrarily complex functions, like "read this record, then add 2 to X if this field is false, else create a new record Y.")
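Based on that description, a method definition and call might look roughly like this. The method name and its logic are hypothetical, and a small stub stands in for the real `Meteor` object so the sketch runs outside the framework; real Meteor additionally handles the server round trip and latency compensation.

```javascript
// Minimal stand-in for the Meteor global so this sketch runs outside
// the framework; real Meteor provides methods()/call() on client and server.
const Meteor = {
  _methods: {},
  methods(defs) { Object.assign(this._methods, defs); },
  call(name, ...args) {
    const cb = typeof args[args.length - 1] === 'function' ? args.pop() : null;
    try {
      const result = this._methods[name](...args); // run as a local simulation
      if (cb) cb(null, result);
      return result;
    } catch (err) {
      if (cb) { cb(err); return undefined; }
      throw err;
    }
  },
};

// Shared method definition: shipping this file to the client as well is
// what gives latency compensation -- the client runs the same function
// as a local simulation while the server's answer is in flight.
Meteor.methods({
  bumpScore(record) {
    // hypothetical logic: "add 2 to x if flagged is false, else reset"
    if (!record.flagged) record.x += 2;
    else record.x = 0;
    return record.x;
  },
});

// Client-side invocation:
Meteor.call('bumpScore', { x: 3, flagged: false }, (err, result) => {
  console.log(result); // 5
});
```

In real Meteor the server re-runs the method and its result replaces the client's simulated outcome if they differ.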
In fact, every Meteor app already has an API :) Your Meteor client can connect to any other Meteor server by calling Meteor.connect(), and can then subscribe to any realtime datasets that that server is publishing, or invoke any methods available on that server. This is done with a protocol called DDP, but we could map DDP to REST fairly easily.
What about a non-Meteor client connecting to a Meteor server? Does that client have to understand DDP, or is that what you mean by "we could map DDP to REST" - there's no support for non-DDP clients now, but it's planned?
The latency compensation and Meteor.call() interface sound great, but what I understood by the GP's question about an API was "how can other clients besides mine talk to my app?".
Edit: to clarify I specifically meant a front-end tool to help me leap a level in personal development skill.
I have been using Backbone for a while, and it's backend-agnostic, which is great!
This one bears more resemblance to Knockout.js (I'm talking client side) and its MVVM design; Backbone doesn't even try to approach that problem.
- meteor <- the low level of the high level :)
You use Meteor as a foundation to build your application on, whereas Backbone is more or less your application itself. Think of the difference between "CoreFoundation" and "AppKit".
From the docs:
Node.js is very low level and is more analogous to Ruby than Rails.
With client-side DB access, and eventual consistency baked into the platform as a core feature, you can't just throw "if current_user == object.owner" in your controller and call it a day. I'd love to know what they're thinking here.
If auth is baked into the framework, then maybe they can give you basic row-level security for free by maintaining a user table with auth tokens that's only accessible on the server side, and passing the auth token along with every database request, and maintaining a meteor-controlled owner key on objects. But that doesn't help if your security checks are more involved. I.e.: "if object.owner.friends.includes?(current_user)".
I'd love to hear what they're planning for this.
Meteor has data validation already, though not user authentication.
The model is that the server chooses what data to expose for reading, and the server also performs the real writes against the database. The client only simulates the mutations. You can define your own mutations ("methods"), and have them simulated on the client or not. If access checks pass on the client, they still might fail on the server. The server can run additional or different code, and you can hide this code from the client by putting it in the "server" directory. If the method results in different side-effects on the server than on the client, the differences are synced down to the client -- exactly what you'd want.
However, in new projects, to make things easier for first-time Meteor developers, we auto-publish all server data. To turn this off, type "meteor remove autopublish". That is, auto-publishing is the default for new projects but not the default for meteor overall.
Anything can be made to look easy if you're ignoring real-life issues. Heck, you can do seamless client updates by refreshing all the pages on my website every second. It will be inefficient, but it is easily doable.
For argument's sake, if we say there are 20 "real-life issues", then all web frameworks tend to address up to 12 of them. New frameworks are created to address some of the issues of past frameworks, but with regressions in other areas.
Recently, I've been using Lift which happens to be very strong on most of the points you named, yet in retrospect, I don't feel it made web development easier overall because it has other deficiencies.
Even when I knew very little about web development, the principle of not accessing the database on the client side seemed obvious and very important.
This trick is already in use in a lot of places. If you click an upvote on Reddit, it doesn't do a complete round-trip, it just increments the count in place, and then issues a command to the server to do a "real" increment. If, in the meantime, someone disabled your account, then there is a disagreement. But it's obvious that the client's idea loses.
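That optimistic-update pattern can be sketched in a few lines of plain JavaScript (all names here are hypothetical):

```javascript
// Optimistic upvote: bump the UI immediately, then reconcile with the
// server's authoritative answer once it arrives.
function makeCounter(sendToServer) {
  let displayed = 0; // what the user sees right now
  return {
    get displayed() { return displayed; },
    upvote() {
      displayed += 1; // optimistic local increment, shown instantly
      return sendToServer().then(
        serverCount => { displayed = serverCount; }, // server wins
        () => { displayed -= 1; }                    // rejected: roll back
      );
    },
  };
}

// Simulated server that rejects the write (e.g. the account was disabled):
const counter = makeCounter(() => Promise.reject(new Error('account disabled')));
counter.upvote().then(() => {
  console.log(counter.displayed); // back to 0 after the rollback
});
```

The point of the sketch: the client's guess is displayed immediately, but the server's answer always overwrites it, which is exactly "the client's idea loses".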
Eventual consistency is in mainstream use on the server, with frameworks like Cassandra et al. This is just a generalization of that to the web client.
I understand the argument that this type of lying is "ok" for the sake of responsiveness, since 99% of the time the data will be accepted.
That argument isn't valid in my eyes.
When I update the users screen to reflect the data they entered, to me, that's an indication that I accepted the data. If I were to first show the data as accepted, and then show an error message if something went wrong, I am undermining the trust the user is placing in my software.
It's almost like the software is confused and inconsistent? Kind of like a person who says one thing and then says something entirely different within a few seconds?
With every software suite, there is an implicit promise that it won't try to mislead me. I don't use software that lies to me, I don't expect others will use it either.
The API should be carefully programmed and kept abstract from the database, which might be one day changed completely. To keep the user interface agnostic, it should not be aware of how the database works.
Also, this is a recipe for mistakes, since giving the client direct access to the database (even if it is secured) raises questions about the data integrity and data protection. It is much more prone to abuse that way.
I'd love to see a CAPTCHA signup implementation with meteor.
Does that freak anyone else out? Where are they doing permissions checking? Who can run what database commands under what circumstances?
"Currently the client is given full write access to the collection. They can execute arbitrary Mongo update commands. Once we build authentication, you will be able to limit the client's direct access to insert, update, and remove. We are also considering validators and other ORM-like functionality."
So just for toys for the moment. Still, very cool...
Once the auth branch lands, you'll have two choices.
One option is that you can turn off the shortcuts entirely and write a method for each scenario where the client would be allowed to write to the database. This gives you the same security model as a REST API.
Or, you can register an authorization-check hook with insert/update/remove, so you can vet each write and decide whether to allow it or reject it. This might work well with an ORM where you've marked some fields as writable, and some as protected.
At this stage, we want to give people a choice and see what they do.
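The second option, an authorization-check hook that vets each write, might look something like this plain-JavaScript sketch. The collection shape, field names, and the `makeCollection` helper are all hypothetical; in real Meteor such a check would run server-side before the write is applied.

```javascript
// Hypothetical in-memory collection whose update path runs an
// authorization hook before touching any data.
function makeCollection(allowUpdate) {
  const rows = {};
  return {
    insert(id, doc, userId) { rows[id] = { ...doc, owner: userId }; },
    update(id, fields, userId) {
      const doc = rows[id];
      if (!allowUpdate(userId, doc, fields)) {
        throw new Error('access denied'); // rejected before the write
      }
      Object.assign(doc, fields);
    },
    get(id) { return rows[id]; },
  };
}

// Policy: only the owner may change a post, and the 'owner' field
// itself is protected (the "marked as writable vs protected" idea).
const posts = makeCollection(
  (userId, doc, fields) => doc.owner === userId && !('owner' in fields)
);
posts.insert('p1', { title: 'hello' }, 'alice');
posts.update('p1', { title: 'hello world' }, 'alice'); // allowed
// posts.update('p1', { title: 'pwned' }, 'mallory');  // would throw
```
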
Reads are handled in a totally different way. You explicitly define what data a given client is allowed to synchronize down to its local cache (by using Meteor.publish to define certain database queries that are available for client subscriptions). The client can run whatever queries it wants against its cache, but the only stuff there is the stuff you explicitly let it subscribe to, so it's OK.
The rationale is that clients typically end up doing sorting and filtering on subsets of the database anyway, as they get more sophisticated and start caching data. For example, Gmail starts to need a notion of an email message on the client, to avoid going back to the server for the same message. When I worked on Google Wave I saw firsthand the complicated plumbing you need in order to do this in an ad hoc way. (They used GWT to share the model objects, but synchronized the data manually.)
You can also separate this facility from your database completely, and use it as a way of sending "data feeds" to clients; then use methods as RPCs.
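The publish/subscribe read model described above can be sketched outside the framework. The publication name, the data, and the tiny publish/subscribe stub are all hypothetical stand-ins; in real Meteor, `Meteor.publish` returns database cursors that livedata syncs to the client cache over DDP.

```javascript
// Minimal stand-in for publish/subscribe so the sketch runs outside
// Meteor; the real framework syncs published cursors over the wire.
const publications = {};
const Meteor = {
  publish(name, fn) { publications[name] = fn; },
  subscribe(name, ...args) { return publications[name](...args); },
};

// Server side: expose only the subset a client may cache locally.
const messages = [
  { to: 'alice', body: 'hi' },
  { to: 'bob', body: 'secret' },
];
Meteor.publish('myMessages', function (user) {
  // only this user's rows ever reach the client-side cache
  return messages.filter(m => m.to === user);
});

// Client side: the local cache holds just the subscribed subset, and
// arbitrary client queries can only ever see that subset.
const cache = Meteor.subscribe('myMessages', 'alice');
console.log(cache.length); // 1
```
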
There would need to be some sort of public key system for authentication, but in the end you are still compromising your data if the client gets hacked. There would have to be a database control layer for the final say, and that's called a server.
This would mean creating database user accounts on the fly for people, but it would resolve this problem (as long as the views are secure).
That may be because of programmer laziness or because of some sort of inherent impedance mismatch between web-scale apps and the user account system of most DBMSes, but it seems like a bad way of doing things.
I think it's high time we had a web-scale data store that actually had decent per-user access control baked into it right at the model level, to the point where a sane person could trust it to live on the open internet. It seems both possible and desirable, but maybe I'm missing something.
I realize that there are huge scaling/throttling/DoS issues with, say, creating a new UNIX user every time someone signs up for your online meme generator, but that's mostly because UNIX wasn't really designed for millions of users on one box.
On the other hand, as an unprivileged user on a Linux box, you can't really do much damage beyond hogging resources and possibly spying on other people's poorly-secured files. If there's a bug and you do find a way to trash the system or escalate privilege, it's front-page news.
The problem right now is that every two-bit web app implements its own ad-hoc permissions system, often at the wrong layer of their stack. If it could be commoditized into a widely-used and widely-audited system, I think it would do a lot to improve security on the Internet.
(To open up a whole new unsupported argument, on some level the fact that one needs a key-value store, a filesystem, and a hand-optimized in-memory cache to build a reasonably fast web service smells like we're still making humans do a lot of things that a machine could do a much better job of.)
See https://parse.com/docs/data#security for more details.
I am curious about the security aspect of writing to the DB directly from the client, but I'm sure you've thought of it. I love the 'add less' and 'add coffeescript' features. Can't wait to try this out soon.
It's tough to find a pace that holds the attention of the experienced people while not losing the people that are new to web development.
If they want this to be the next Django/Rails/Express/whatever, where a large and enthusiastic community both uses and develops the product then the answer is fairly clear: No, this is not a suitable license.
If their idea is that it's a nominally open source project where all control and most development takes place inside the originating company, and the community mostly just pays license fees, then: Sure, it's a great license.
Longer clarification: Django is licensed under BSD, Rails and Express under MIT. Why? Because these licenses allow me to use these frameworks for my own webapps, without making the entire client/server code base available. I can toss together a Rails-based contacts manager, or a Django-based todo list app, and let people sign up, and even charge a monthly fee, and NOT have to give the code to the entire app out.
Someone else linked to Sencha's discussion of the GPL and JS webapps, and it's highly relevant:
In short, as long as Meteor is under the GPL, I can use the framework for free, but I cannot let anyone use a webapp I create with it without giving away all the source code. Which means that I'm very unlikely to actually install and try Meteor on my next project. My choices are basically "ignore Meteor" and "pay a license fee to the devs". Meteor is awesome, and probably well worth the license fee (whatever it might be), but it's fairly obvious that this is a huge limitation on the framework's potential for adoption.
(It's worth noting that both Django and Rails were actually developed for a project, and then released under permissive licenses, because the framework wasn't the product, and the dev teams didn't need to try and monetize it. Meteor is the product, and the dev team does need to monetize it. And, again, Meteor looks awesome! So I can fully understand why they chose GPL, and I fully support that choice! But it is worth noting the consequences of that.)
But even after twenty-five years, there are still people who think they have to make a code base proprietary to make money on it.
What do you think will happen if you toss together a contacts manager or to-do list app and charge a monthly fee, and give out all the code, just as the Meteor devs have given out all their code to you? Maybe some of your users will decide to run the app on their own server and stop paying you. Or try to compete with you. But probably most of them will want to use the site operated by the app's primary developer. And all of your would-be competitors are just free R&D increasing the value of your site.
What's so terrible about making web apps that are free software?
But open source frameworks and languages that try and give developers the maximum freedom to make whatever they want see much higher adoption, uptake, mindshare, marketshare, engagement, developer excitement, community participation, user-submitted bug fixes, etc., etc., etc. than ones which don't.
Start listing popular frameworks - how many of those frameworks are GPL?
Off the top of my head, I would name: Sinatra, Rails, Django, Flask, Backbone, Batman, Knockout, Tir, CakePHP, Symfony, Spine, CherryPy, web.py, Pyramid, Zend, and Brubeck. Of those, every single one except Brubeck is licensed with BSD, MIT, or some variant - and Brubeck might be too; I couldn't find license info.
There's nothing terrible about making web apps that are free software... but the plain truth is, people don't make web apps that are free software with frameworks that require it. They go pick one of the popular frameworks, which all have permissive licenses. You can be horrified if you want. :)
If I'm going to start a new hobby or side project, perhaps hoping that it'll get bigger someday, I still have no idea if, when, or how it'll make money, so it doesn't make sense to contact Meteor up front to find out how much licensing costs. But the doubt will always be there, and as likely as not I'd just not use Meteor, to avoid finding out too late that the commercial license terms are unacceptable for whatever I end up with.
There's also uncertainty about what exactly I can or can't do with a GPL-licensed web framework. To some it's very clear that I'd only have to release my changes to the framework; to others it's very clear that I'd have to release the whole app as GPL. To me it's unclear either way, and as long as that uncertainty exists, it makes using it a tough call.
If it's hard to justify doing a hobby project with those concerns, it'll be next to impossible to get buy-in on a new project at work. In fact, as cool as Meteor looks, and as much as I'd love to use it, I wouldn't even present it as an option.
I'd prefer to pay for support or hosting or add-ons or whatever, as long as I know up front what the costs are likely to be. I don't think a more liberal license will prevent the community from contributing back; Rails, Django, Node, etc. are the obvious examples. I'd love to see Meteor take off, and the devs obviously deserve to make money on it, but personally the GPL just feels like the one huge downside to an otherwise very exciting project.
The majority of application code is run on a user's browser. This means it is distributed (or conveyed) to the user. Thus, you have to offer every user of the app a possibility to obtain the unobfuscated source code.
But there is more: The GPL requires you to offer not only some JS files, but everything needed to run the application. This would include everything on the server-side as well.
Sencha's take on this might also be interesting in this context: http://www.sencha.com/legal/open-source-faq/
They have been using a GPL dual-licensing model for their products for quite some time now and I suspect they consulted more than one lawyer.
With the current licensing it looks a lot like they are pondering a dual-licensing business: Offer a GPLed version for free and charge for commercial licenses.
Since the product seems to be very promising and since the GPL is not suited for many use-cases (web startups, freelance work for clients and even many inhouse developments), this might actually work.
Being a developer myself, I see nothing wrong with a bunch of other developers wanting to be paid for their work. If this is really how they want to play it, I wish them all the best. (If this really is the case, I find the current copy on their website a bit misleading, though).
On the other hand, if their goal is to establish a vibrant open source community around meteor, then I think they are on the wrong track.
To accomplish that you need to have a low barrier of entry for a) users of your product and b) contributors to your project.
The GPL, in the case of a web-framework that blurs the client-server divide, sets the bar quite high for both groups:
a) Other commenters seem to agree with me that every app developed with a GPL'ed meteor has to be put under GPL as well. The implications alone might drive some people away from using meteor, but even having to think about those things upfront can be discouraging. Some people do not like the GPL, some may not be able to work with it and some may simply want to be able to choose the license for their work themselves.
b) If they are offering a commercial license, it will not be easy for them to accept outside code contributions. They need to establish some legal documents (Contributor Agreements, maybe Copyright Assignments) and a process for accepting contributions. This makes it a lot harder to contribute than simply sending a pull request. They cite the MySQL business model as something they closely studied. MySQL has a particularly bad reputation for not accepting outside contributions, and this is mainly due to their dual-licensing model.
I strongly believe that Ruby on Rails became popular so quickly and still has an active community because of its liberal license (MIT).
Personally I am very excited about meteor and I wish that they reconsider using another license. It does not have to be MIT, even though that is what I would choose, but even LGPL would be better IMHO.
It pretty much kills the usefulness of this project.
Want to use Google Analytics? - nope you can't.
Want to offer third-party oauth login ? - nope you can't.
Want to use like/tweet/+1 buttons ? - nope you can't.
The GPL essentially means your website can't have anything on the client side which is not GPL-compatible. And from reading the Meteor guys' website, that's not what they intend: their choice of the GPL was to ensure any changes to Meteor get contributed back, and for that the LGPL is a much better licence.
Meteor's own website violates the GPL licence of Meteor as it stands (assuming it contains code from any third-party contributor).
You're probably right about Google Analytics.
It may be true that they adopted the GPL without carefully thinking it over, or it could be a deliberate choice, as with GhostScript — whose commercial customers are mostly printer companies, who are willing to pay for a license so they don't have to send their printer firmware source code to all their buyers, which the GPL would require.
Sencha is another JS toolkit using the GPL to encourage people to buy licenses for proprietary use.
That's a fairly insane contention. The GPL license document itself would violate that.
> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
It's why it's now commonplace for GPL projects to licence the content part of their product under Creative Commons and the code under the GPL.
If I use it in my project, and after a while I want to charge for the service and I don't distribute all my code, am I in trouble? If yes, I forecast a slow and steady walk into the event horizon for this nice little thing.
If dual license is what they really want, I'm not particularly interested at this stage.
"The MPL has been approved as both a free software license (albeit one with a weak copyleft) by the Free Software Foundation and an open-source software license by the Open Source Initiative. The MPL allows covered source code to be mixed with other files under a different, even proprietary license. However, code files licensed under the MPL must remain under the MPL and freely available in source form. This makes the MPL a compromise between the MIT or BSD licenses, which permit all derived works to be relicensed as proprietary, and the GPL, which requires the whole of a derived work, even new components, to remain under the GPL. By allowing proprietary modules in derived projects while requiring core files to remain open source, the MPL is designed to motivate both businesses and the open-source community to help develop core software." - http://en.wikipedia.org/wiki/Mozilla_Public_License
You appear to be confusing GPL with the "GPL With Linking Exception" https://en.wikipedia.org/wiki/GPL_linking_exception
As the Wikipedia article on the GPL notes, the problem is not so much the GPL itself as the fact that "derivative work" is not well pinned down in copyright law; that it appears ambiguous in the GPL is a result of that. My rule of thumb is to approach it from the angle of "what is the software worth if you take out what is linked?" If the software can function without the linked code, it may not be derivative, but the real question is whether it is useful without the linked code, whether it carries out the tasks the user needs the software for without it.
To me, it's more of a social question of how much the linkee is indebted to the linked.
So no, I don't think I am confusing the two.
I grabbed the source to see what it would take to write a MySQL plugin. As far as I can tell, it's fairly tightly tied to MongoDB. It seems this would benefit a great deal if the minimongo package had some sort of interface to build against for custom data sources.
I would love to add mysql-livedata, and a mini-sql of sorts. Or maybe even a simple mini-orm with basic insert / update / delete / select methods and a more advanced "WHERE" syntax (SQL cousin to minimongo's selector magic). Obviously the real magic is in the livedata synchronization.
Regardless, this is very impressive. Thanks for putting this together!
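To make the idea concrete, here is a rough sketch of what a client-side "mini-orm" interface like the one described could look like. Everything here (the `MiniSql` name, the predicate-based `where` style) is invented for illustration; nothing like this exists in Meteor:

```javascript
// Hypothetical in-memory client cache with basic insert/update/delete/select
// and a predicate-based WHERE (a SQL-flavored cousin of minimongo selectors).
class MiniSql {
  constructor() { this.rows = []; }
  insert(row) { this.rows.push(Object.assign({}, row)); }
  // select(where) returns rows matching the predicate; no predicate = all rows
  select(where) { return this.rows.filter(where || (() => true)); }
  update(where, changes) {
    this.select(where).forEach(r => Object.assign(r, changes));
  }
  remove(where) { this.rows = this.rows.filter(r => !where(r)); }
}

const users = new MiniSql();
users.insert({id: 1, name: "ada"});
users.insert({id: 2, name: "bob"});
users.update(r => r.id === 2, {name: "bea"});
users.select(r => r.name === "bea").length;  // 1
```

The hard part, as noted above, would not be this interface but wiring it into the livedata synchronization.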
1: As far as I can tell, the server won't even run without Mongo, at least according to this in server.js: `throw new Error("MONGO_URL must be set in environment")`
2: An incredibly cool browser-based implementation of the mongodb query api
And yes a second time: to package a sql database, you would ideally have a 'minisql', a client-side cache with a sql-compatible interface. It's not strictly necessary though. There's no reason a DDP dataset couldn't use a different database API on the client and the server (say, sql on the server, mongo on the client), but if you did this and you wanted latency compensation, you'd have to write all of your methods twice, once against each API.
I am really excited about having sql support, but we tried really hard to defer anything that wasn't absolutely necessary for release :)
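The "write your methods twice" cost can be sketched like this. All names here (`insertRoom`, the cache and SQL stubs) are invented for illustration, not real Meteor APIs; the stubs stand in for minimongo on the client and a SQL driver on the server:

```javascript
// Stand-ins for the two different database APIs.
const clientCache = { rooms: [] };                  // mongo-style client cache
const serverDb = {
  rows: [],
  query(sql, params) { this.rows.push(params[0]); } // fake SQL driver
};

// Client-side simulation: runs immediately so the UI updates without
// waiting for the server (latency compensation).
function insertRoomClient(name) {
  clientCache.rooms.push({ name });
}

// Server-side implementation of the same method, written a second time
// against the SQL API.
function insertRoomServer(name) {
  serverDb.query("INSERT INTO rooms (name) VALUES (?)", [name]);
}

insertRoomClient("lobby");  // simulated result, shown to the user right away
insertRoomServer("lobby");  // authoritative result, once the call arrives
```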
DDP is the protocol that livedata speaks between the client and the server. In Meteor, the client and the server mostly keep themselves at arm's length. The client doesn't know that the server is made with Meteor, just that it speaks DDP, and vice versa.
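For a feel of what that arm's-length relationship looks like on the wire, here are two DDP-style messages. The shapes are based on my reading of the protocol; treat the field names as assumptions rather than a spec reference:

```javascript
// Client asks to subscribe to a record set; server streams documents back.
const subscribe = JSON.stringify({
  msg: "sub", id: "1", name: "rooms", params: []
});
const added = JSON.stringify({
  msg: "added", collection: "rooms", id: "abc",
  fields: { name: "lobby" }
});

// Either side only needs to parse JSON and dispatch on "msg";
// neither end needs to know the other is Meteor.
const dispatch = raw => JSON.parse(raw).msg;
dispatch(subscribe);  // "sub"
dispatch(added);      // "added"
```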
This looks big. If nothing else, hopefully others will notice the code-to-supporting-materials ratio. It's just as relevant for straight open source as it is for "businesses around GPL".
I'm really happy to see so many developers taking UX seriously, even if it's just for other developers. Quite a nice trend I'm seeing with libraries/frameworks these days.
Other than that: "holy shit" was the first thing that popped into my head after watching 2 minutes of the video. Excellent work guys!
A couple of questions for you:
1) What's the current and projected goal in terms of concurrency for the server? Is this going to be able to handle thousands of simultaneous connections? Is the Node server built on anything else like Socket.IO, or is it fully new and unique code?
2) How does this handle conflicts? ShareJS implements Operational Transformation to attempt to smartly handle conflicting writes from multiple clients (used in Google Wave, for example). Does this just do a simple timestamp-based overwrite? If so, how far down the tree? I'm not familiar with MongoDB (sorry), but would it be at the object level? Or the property level?
EDIT: I see that they mention OT on the people page: http://www.meteor.com/about/people, so excuse my ignorance about the people behind the work...
There's a bit more in the FAQ. Our approach is based on rendering the template as a string (it doesn't require building a DOM tree on the server and then serializing it).
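As a generic illustration of the string-based approach (not Meteor's actual code): the template builds HTML as a plain string, which the DOM layer can then consume in one go, instead of constructing and serializing a server-side DOM tree:

```javascript
// Minimal string-based template rendering. No DOM objects are created;
// the output is just a string handed to the browser in one shot.
function escapeHtml(s) {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}
function renderItem(item) {
  return "<li>" + escapeHtml(item.name) + "</li>";
}
function renderList(items) {
  return "<ul>" + items.map(renderItem).join("") + "</ul>";
}

renderList([{name: "a"}, {name: "b"}]);  // "<ul><li>a</li><li>b</li></ul>"
```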
There's a lot of overhead on both the server and client side when generating everything dynamically, especially on slow mobile clients. Testing it on a fast machine on localhost creates a visible delay in the initial DOM update. On typical content/read-heavy pages it would make sense to always serve static pages as fast as possible, and update the cache and client DOMs on database writes.
In my opinion, this is the missing feature to make Meteor revolutionary in the process of web development.
- Is a: web framework
- Depends on: node.js
- Comparable to: backbone.js knockout.js rails django
- Connects: ....
Does Meteor have any opinions on testing? I didn't see anything from a quick glance at the docs.
I love the idea of tighter development cycles. But as a Meteor app grows in size and complexity, won't it still need a test suite to prevent regressions?
Eager to hear thoughts on this from the Meteor devs.
So I agree, it would be good to hear from the devs about their opinions and thoughts on testing. (Is there a chance that they think the "realtime-by-default" approach lessens the need for testing?)
I'm no testing zealot, but I've found writing tests useful and it would be a shame to be frustrated by that missing piece in an otherwise great project.
You're 100% right though. We don't have a fully baked story yet for application level testing. It's something we're trying to push forward as soon as we can.
3) provide at least guidance/best-practices on client-side testing in the _headless_ scenario (no complex setup of remote machines with 3-4 browsers installed etc)
People may have their opinions about testing UI/client-side code (is it end-to-end? is it worth it? is testing without a browser a no-go?), but the fact that Meteor leans toward more code on the client side will definitely make testing an almost make-or-break decision for a group of developers.
GWT, with its MVP approach, is definitely heading in that direction, and it is quite unfortunate that the client-side JS community hasn't picked up that style (some brushed GWT off because it is Java).
Rails won the heart of many Java developers not only because of its simplicity but also because of automated testing.
So, show how easy it is to test the whole Meteor app and I (almost) guarantee you will get many more developers (especially those who skipped the Rails boat).
Don't get me wrong: I want to like Meteor. But I've found automated tests too important to my workflow to live without.
$ curl install.meteor.com | sh
Unable to install. Meteor supports RedHat and Debian.
Looks like an interesting approach. I always thought SocketStream would be the next big thing, but now I am uncertain. Nice that there are a number of frameworks though! :)
Oh, CoffeeScript support would be nice.
Meteor has some nice packaging and deploy stuff and the hosting is nice, but that to me is a nice-to-have.
Monetization: We're still figuring that out. For now, we just want to see what people do with it. We think a lot about companies like Red Hat and MySQL, and how they have been successful while building open ecosystems.
- What's browser support like?
- Do you do synchronous AJAX to return results synchronously for DB operations on the client? It looked like newly created records' IDs were returned as soon as the creation code was executed.
- How do you track a reactive function's dependencies?
- Do you plan to make a run loop similar to Ember's to stage and order various recalculations and DOM updates?
$ curl install.meteor.com | /bin/sh
- Though it "feels wrong" (definitely with you there), is it really any different from downloading an installer package and running it?
- You can just go to install.meteor.com in your browser if you want to see what the script does (or just leave off the | sh), which is arguably better than other installation mechanisms.
Curious to hear people's thoughts :)
Do. Not. Require. Root. Not optionally, not sometimes, not "maybe". NEVER.
Your universe is ~/.meteor. Do not even think about touching anything outside that. It's taboo.
You clearly know what you're doing otherwise, so please get this one right from the start and save yourselves a lot of headache later on (cf. npm).
Install to ~/.meteor and provide the user with a small snippet to paste into their ~/.bash_profile ("source ~/.meteor/magic.sh").
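A minimal sketch of what that magic.sh could contain (the file name comes from the suggestion above; the contents here are invented): prepend the per-user bin directory to PATH, guarding against duplicate entries when sourced repeatedly.

```shell
# Hypothetical ~/.meteor/magic.sh: make the per-user install visible on PATH.
METEOR_HOME="${METEOR_HOME:-$HOME/.meteor}"
case ":$PATH:" in
  *":$METEOR_HOME/bin:"*) ;;           # already on PATH, do nothing
  *) PATH="$METEOR_HOME/bin:$PATH" ;;  # otherwise prepend it
esac
export PATH
```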
I'm sure you have good reasons, I just can't see them here.
To illustrate, a few simple questions:
What if I need multiple meteor versions on the same system (different versions for different users/projects)?
How do I quickly switch between different meteor versions (often needed in fast-moving projects like this)?
How do I monkey-patch meteor to try something out and/or contribute back?
What if I absolutely must mix meteor with an unsupported version of node or other runtime dependency?
What if all my servers are SuSE Solaris95 but you only provide packages for Debian and RedHat?
How do I bundle my meteor runtime with my deployment?
If you don't want it in /usr/local, you can check it out anywhere you like and just run the 'meteor' in the top directory of the checkout, and it'll work great. This is mentioned in the README/on Github, and it's how we have been developing Meteor. You can have as many copies of Meteor on your system as you like, and they can even coexist with a copy at /usr/local. So for me, I type '~/co/meteor' to run the meteor that I'm developing on, and 'meteor' by itself to run the latest official release.
You can either build the binary dependencies (node, etc) yourself by running an included script, or if you do not, the first time you run meteor it will automatically fetch a prebuilt binary kit for your convenience. All of the binary dependencies are kept in a directory in the checkout and managed by meteor, meaning that each meteor install on your system can have its own version of node.
Finally, to your last point, 'meteor bundle' does exactly that :P In fact, right now on our deploy servers, we have live apps running that were deployed with a variety of historical versions of meteor, all coexisting side by side.
I'm sure we'll have to revisit all of this as the Meteor ecosystem gets bigger and more complicated -- when all of the packages do not fit in a 'packages' directory, and when there are more binary dependencies than node and mongo. But I hope that this helps to persuade you that we're not totally nuts, despite how sleep deprived I was when we recorded that screencast.
If you thought this was a helpful answer, could you repost the question on Stack Overflow (with the 'meteor' tag) and drop me an email (address is in my HN profile)? That way I could repost this answer there in case it might be helpful to other people.
I agree with almost everything you say; you seem to have it right then, I just didn't go through github so I never saw the README.
However, I would still argue that you should get rid of the packages; that's extra work you could better spend elsewhere. For someone installing a dev toolkit, it's absolutely not too much to ask to paste "echo >>~/.bash_profile 'source ~/.meteor/magic.sh'".
They have to do that anyway as soon as they need the latest git-HEAD because of some bug in the release version (we know how it goes, don't we? ;-)).
So by giving them the right instructions right away, you save them this extra step. And you save yourself from making a bad (and false!) first impression on the older and grumpier devs like myself.
I do prefer packages installed to /usr. You can easily solve the multiple-version-problem: Debian/Ubuntu has `alternatives`; Gentoo has `eselect` etc.
With these tools it is easy to "slot" packages (Gentoo terminology): /usr/bin/meteor would become a link to /usr/bin/meteor-0.1, /usr/bin/meteor-0.2 or whatever. And it is selected by the `alternatives`/`eselect` commands.
And how do you bundle your runtime? Just install the darn package!
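The symlink "slotting" mechanism that `update-alternatives`/`eselect` manage can be sketched by hand. The paths below are throwaway demo paths under /tmp; the real tools do the same dance in /usr/bin and track the state for you:

```shell
# Two "installed versions" as tiny stand-in scripts, plus a selector symlink.
DEMO=/tmp/meteor-slot-demo
mkdir -p "$DEMO"
printf '#!/bin/sh\necho 0.1\n' > "$DEMO/meteor-0.1"
printf '#!/bin/sh\necho 0.2\n' > "$DEMO/meteor-0.2"
chmod +x "$DEMO/meteor-0.1" "$DEMO/meteor-0.2"

ln -sf "$DEMO/meteor-0.2" "$DEMO/meteor"   # select version 0.2
"$DEMO/meteor"                             # prints 0.2
ln -sf "$DEMO/meteor-0.1" "$DEMO/meteor"   # switch the slot to 0.1
"$DEMO/meteor"                             # prints 0.1
```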
If you are using SuSE Solaris95 and all your software is packaged for Debian or RedHat, then you should consider switching to a distribution that makes it easy to install third party software using the native package manager. If that means building a deb or rpm, then so be it; it's not that difficult.
edit: And by "not everyone", I mean me, running Arch Linux.
npm is out of my scope (in fact, even arch is)
I will also admit to disagreeing with the poster suggesting that it should all go into ~/.meteor. I kind of hate that type of install, at least when I don't have the option. I don't want to install it multiple times for different users, and I don't like "system" software in my home directory; I like to keep my user data there and back it up more regularly than /usr/local, which is all stuff I can reinstall in case of disaster.
I guess the ideal would be to ask the user.
# add to $PATH
mkdir -p "$PARENT/bin"
rm -f "$PARENT/bin/meteor"
ln -s "$TARGET/bin/meteor" "$PARENT/bin/meteor"
As someone else pointed out, it also makes versioning difficult.
curl $URI | sh
Usually I download something, look at it (more to see how it works than for security, but I digress), and then sh it.
We know the Meteor guys well and are really excited about what they're building. It's very complementary.
We're both advocating a new paradigm in software development and the faster it comes the better.
I have to ask, I know that Mongo is all the rage these days, but how quickly could you add MySQL support? I can already see several uses for this, but I need to tie into legacy systems.
In terms of security (say a dashboard), could you just disable the system from being able to modify the DB and just read it?
I'm developing with socketstream right now, and I love it. I hope they are all successful.
1. Running complex DB queries on the client
2. The in-memory database cache described in the documentation using a lot of memory
3. Having little control over how often the client hits the server and vice versa.
I think that, in terms of security and flexibility, the API may wind up being a lot more complex than it is now. Maturity aside, it's clearly an excellent idea and excellent work so far...
(Full disclosure: I'm friends with most of the Meteor crew, very excited to see the wrapping come off!)
I'm not trying to put down Meteor, I imagine it's great.
It seems as though it has combined automatic polling and asynchronous updating together, and has removed traditional controllers and models. Yeah, it's cool out of the box and yeah, it's great for scaffolding and rapid development.
But really, it seems like another stack to learn and completely depend upon, and with a codebase that "liquid", it seems much riskier than doing things traditionally. I'd be worried that it auto-injects into the running app.
Don't get me wrong — it's amazing. But what's the real, actual benefit here?
The MS frameworks were great in themselves; they even cooperated somewhat. But they ultimately lacked the ability to support long-term development, you know, when people just drop out of the project and you suddenly need to bring someone in ASAP.
Keeping this in mind will allow the new generation of "web guys" to do better than us, the "desktop guys".
I tried to install meteor using cygwin and hit "Sorry, this OS is not supported yet.".
For us it needs to support our live data feeds, interoperate nicely with our PostgreSQL db (via our own API?) and, if possible, take advantage of our existing Django application.
Our rendered dashboards need to be seamless and fast. I've been leaning towards highly cached static data to ensure this.
Any reason the GitHub link isn't above the fold? That's the first thing I looked for...
I'd also be curious to know if any of the live reloading stuff was inspired by Bret Victor's "Inventing on Principle"? Something about tools that let you create "live" seems to be in the air.
Also have you built a "solid" production app with it?
P.S.: You should provide your CLI via npm. Go and register the name before someone else does. (It's not taken currently http://search.npmjs.org/#/meteor)
That combination seems like it could be spectacular, if it's at all possible.
Freaking love the name too (admittedly a non-trivial factor to me when deciding whether to investigate new technologies).
So node's not a cancer now?
EDIT: oh yeah, Backlift! (http://www.backlift.com) This is definitely an interesting trend.
But what I see here is retrying what has already failed, only now the DB is even less protected. Doesn't seem very wise to me.
Seems to have a minor glitch with Unicode characters? When you do Colors.remove(), everything goes away, but the entries with Unicode all come back. Somewhat strange...
Interesting tech, but db access from the client scares me to no end, call me a curmudgeon I guess.
On a related note, they took it down now.