Ok, thanks for the information. A test suite should be part of the DNA of any framework, in my opinion, and the lack of prioritization suggests the Meteor team's values don't really align with what I originally expected.
[meteor dev here] We actually have a great realtime test system that we use to test the framework itself. Whenever you save a file, your browser test console automatically updates to show which tests are passing and failing. It's great for TDD. We also have a script that runs a set of tests in the cloud, simultaneously on all supported browsers. I'm hoping we'll get that integrated directly into the test console eventually, so you get feedback on IE breakage in realtime. The current system's API (tinytest) is sort of janky, and before we bless something as the officially recommended testing system for 1.0 I'd like to see more experience with the framework and broader community input, which is why we've held off.
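For readers who haven't seen Tinytest, a minimal sketch of what a test registered with it looks like is below. The tiny inline stub of `Tinytest` is purely illustrative so the snippet runs standalone; inside a real Meteor app you'd just call `Tinytest.add` from a package test file and the browser test console would rerun it on every save.

```javascript
// Illustrative stub: just enough of the Tinytest API to run this file
// outside Meteor. In a real app, Tinytest is provided by the framework.
const Tinytest = {
  add(name, fn) {
    const test = {
      equal(actual, expected) {
        if (actual !== expected) {
          throw new Error(name + ": expected " + expected + ", got " + actual);
        }
      },
    };
    fn(test);
    console.log("PASS " + name);
  },
};

// A test registered this way shows up in the browser test console
// and reruns automatically whenever you save a file.
Tinytest.add("arithmetic - addition", function (test) {
  test.equal(1 + 1, 2);
});
```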
Where we are weakest is on tests that are driven by UI scripting. I was really impressed with the system we had for that at Asana and I'd like to do something similar in Meteor. But right now our priorities are opening up the package system and doing a round of server scalability work.
Don't underestimate the value of an early standardized test framework. This posting itself is a great opportunity to teach the community what a canonical Meteor application should look like, and not having testing baked in makes the code a lot less readable.
I'd like to see what a fully decomposed Meteor application is supposed to look like from an architecture perspective, but every time I see an example it looks like this.
Testing frameworks force you to break things up, resulting in a better overall design. This is what should be exposed to a nascent community if you don't want serious growing pains later on (look at the struggles the PHP world is having adopting testing and modular architectures).
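To make the "testing forces decomposition" point concrete, here is a hedged sketch (the template and function names are invented for illustration): instead of burying counting logic inside a Meteor template helper where it can only be exercised through a running app, you pull it out into a pure function that any test runner can call directly.

```javascript
// Hypothetical refactor for testability. Before, the logic might live
// inline in a helper, untestable without a browser and a running app:
//
//   Template.todoList.helpers({
//     remaining: function () { /* count undone todos inline */ }
//   });
//
// After: a pure function with no Meteor dependencies.
function remainingCount(todos) {
  return todos.filter(function (t) { return !t.done; }).length;
}

// The helper then becomes a thin wrapper around remainingCount,
// and the interesting logic is trivially unit-testable:
console.log(remainingCount([{ done: true }, { done: false }, { done: false }]));
```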