Kinda like Wallaby.js, but without hitting me over the head with a license :P
Over the past few months, that has changed entirely. Improved speed, lots of bug fixes, snapshot tests and general ease of setup with React projects have all really helped.
Jest is now the test framework I would suggest for anyone starting a new React project.
This isn't commentary on the quality of Jest. After turning off auto-mocking and instead only using the mocking when I really needed it, I ended up using it for a moderately successful side project and enjoyed the experience.
I use faucet for pretty-printing the output.
"Add support for tests that output Test Anything Protocol (TAP)"
I think there's definitely space for some local commands which graph, analyse, etc. a bunch of TAP, but I haven't found anything like that :(
I think there's room for one or two test frameworks that Just Work. Sure, you'll lose some configurability, but mostly you don't need it.
As opposed to the alternative, which is dealing with technical debt, spaghetti, legacy code, etc...
Before, I was wrangling with Grunt and PhantomJS, but because the PhantomJS version lagged ages behind I couldn't really test ES6, so I had to run a hybrid setup: Mocha in an actual browser, and the rest of the dev stack in Grunt. Now I can do it all automatically. Not only that, but Jest includes a browser-like environment by default that supports ES6, plus an assertion library, so with just 'jest' I'm doing the same thing I used to do with Mocha, Chai, and PhantomJS (plus the pain of installing PhantomJS separately).
I am not so much into React, but I just fell in love with Jest. Testing will be something totally different from now on, thank you Facebook.
PS, it was a bit more difficult to integrate Jest into Grunt, and I lose the colors in its output, but I'm sure I'll find a solution soon-ish.
- it uses regexes instead of globs, so you can't just give it a list of tests the way you would with Mocha or electron-mocha (e.g. something like "jest src/**/*.tests.js")
- it excludes paths containing "node_modules", and this isn't just a default (which would be fine), it's hardcoded, so you can forget about local modules, or dependencies that use local modules
Fortunately, there are already open issues for both, so that might improve in the future :)
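The matching behavior described in those two bullets can be sketched in a few lines of plain JavaScript. This is an illustration of the idea, not Jest's actual code, and the exact default pattern varies by version; the regex and file names below are made up for the example:

```javascript
// Jest-style test selection: a regex picks out test files, and anything
// under node_modules is skipped unconditionally.
const files = [
  'src/app.tests.js',
  'src/utils/math.tests.js',
  'src/app.js',
  'node_modules/local-dep/dep.tests.js', // excluded no matter what
];

// Roughly the shape of a test-matching regex (illustrative, not Jest's default):
const testRegex = /\.tests\.js$/;
// The hardcoded exclusion the comment above complains about:
const ignoreRegex = /node_modules/;

const selected = files.filter((f) => testRegex.test(f) && !ignoreRegex.test(f));

console.log(selected);
// → [ 'src/app.tests.js', 'src/utils/math.tests.js' ]
```

A glob like "src/**/*.tests.js" would express the same intent more directly, which is what the parent comment is asking for.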
FWIW, Jest is insanely good. I've been using it for almost 2 years now through some of the rougher patches in its development and man, it's an extremely well thought-out system with a team of highly motivated maintainers. Facebook uses it for almost all their JS projects, so it gets a LOT of attention and TLC.
Many things that were poor defaults have been changed since v17, like getting rid of auto mocking. It's worth a look again.
* I agree regexes suck but that's all we had five years ago when it was started. Would love to move to a glob system incrementally but most of that will have to come from the community. I'm also still concerned about performance. We match things against tens of thousands of files a lot and regex seems strictly faster to me but I'm happy to be proven wrong or shown that it won't be relevant.
* There is one minor issue with create-react-app's recommended use of node_modules to split up things. I would recommend lerna for multi-package development ( https://github.com/lerna/lerna ) and we are hoping to put whatever fix in place here that will work well. There are workarounds but they aren't great.
Jest has gotten a whole lot better over the last few months.
It used to be slow, cryptic and dogmatic.
Now it's fast, transparent and open to debate.
Great job Jest team!
I would qualify that but I started this comment saying it was going to be frivolous.
If enough people care however I'll expand on this.
The unmock becomes very annoying after a while.
Since then, apparently, it has changed hugely, and is now fast, reliable, and pleasant to use. Or so I've heard; I have no particular reason to switch back to Jest, since Mocha is fine.
But if you're wondering why you've heard about it being slow, it's because for many, many months Jest was absurdly, unbelievably slow; simple "hello world" tests would take seconds, and even a medium-sized project would take minutes. And there was no watch mode, no ability to re-run failing tests, and no way to re-run tests on changed files. It was absurd.
Edit: See, eg, https://github.com/facebook/jest/issues/116 opened 8 Aug 2014, and finally closed 17 Feb 2016. Quite a run.
Also, there were tons of bugs with the mocking and, especially, the `dontMock()` methods; many things could cause attempts to turn mocking off to silently, invisibly fail (or, even more fun, an attempt to disable mocking on one item could cause it to be silently, invisibly disabled on all items). It's amazingly hard to track down a failing test when a bug triggered by code in another test can cause items that should be mocked to not be mocked, and vice versa.
The speed is also very much improved!
They recently changed mocking to be opt-in.
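For reference, being explicit about the new opt-in behavior is a one-line config entry. This is a sketch of a `package.json` fragment, assuming the `automock` option name:

```json
{
  "jest": {
    "automock": false
  }
}
```

With automocking off, individual modules are mocked explicitly with `jest.mock('./path/to/module')` inside the test file, rather than every import being mocked by default.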
1. What good books (or repos) that use JS can you recommend for writing good tests?
2. What advantages does Jest have over Jasmine?
Looks like jest tries to cover all three.
The painlessness comes with a cost.
There is a fantastic post by Ben McCormick about the up and downsides of this system: http://benmccormick.org/2016/09/19/testing-with-jest-snapsho...
Snapshots aren't the only feature of Jest; it is simply one assertion in our assertion library. There is a ton of other stuff in the framework that makes setting up a test environment and writing tests easier.
My philosophy for building a test framework is to build a good feature set to help you out in any situation but the user should be in total control. This allows you to find the best way to test your code which I've found to be extremely subjective. The choice of test framework and test methodology seems almost religious at times and I've deliberately tried to stay away from these conversations.
The Ben McCormick post does cover some of the issues with snapshot testing, particularly around lack of communication of developer intent.
If you wrote a unit test asserting that a tree of data looked exactly one particular way, when the only correctness criterion was whether a particular prop was present on one of the branches, then you've written a bad test.
Using a snapshot means that your test fails when your code changes in ways that are still correct. It's a test that invites lots of false negatives, and the reason that is considered acceptable is because it makes it easy to update the test when it predictably gives you the false negative.
Normally writing tests that depend on the exact implementation details of the code under test is considered a bad thing. I can't help but think that there are better solutions to this problem.
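The trade-off being debated here is easy to see in a toy version of the snapshot mechanic. This is an illustration of the idea only, not Jest's actual implementation (Jest stores snapshots in `__snapshots__/*.snap` files and uses its own serializer); the function and data below are invented for the example:

```javascript
// Toy snapshot assertion: serialize the value and compare against the stored
// snapshot. Any change to the serialized form fails the test, whether or not
// the change actually broke anything -- the false-negative problem discussed
// above. Passing update=true (the equivalent of Jest's `-u` flag) resets it.

const snapshots = {}; // stand-in for __snapshots__/*.snap files

function toMatchSnapshot(name, value, update = false) {
  const serialized = JSON.stringify(value, null, 2);
  if (!(name in snapshots) || update) {
    snapshots[name] = serialized; // first run (or update): record, don't fail
    return { pass: true };
  }
  return {
    pass: snapshots[name] === serialized,
    expected: snapshots[name],
    received: serialized,
  };
}

// First run records the snapshot...
console.log(toMatchSnapshot('tree', { type: 'div', props: { id: 'a' } }).pass); // true
// ...a correct-but-different render now fails...
console.log(toMatchSnapshot('tree', { type: 'div', props: { id: 'b' } }).pass); // false
// ...until the developer reviews the diff and updates the snapshot.
console.log(toMatchSnapshot('tree', { type: 'div', props: { id: 'b' } }, true).pass); // true
```

The cheapness of that last step is both the feature (updating is one flag) and the criticism (it encourages updating without really understanding the diff).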
Almost immediately after one of our teams started using snapshot testing, we had test breakages because an entirely different component that we used in an ancillary way had a semver minor change (added a new prop that was defaulted) and that broke tests (but not the app) in our component. Perhaps we're doing it wrong, but adding tests to packages that can trivially be incorrectly broken by your dependencies changing is fairly painful (and makes a mockery of semver).
> Almost immediately after one of our teams started using snapshot testing, we had test breakages because an entirely different component that we used in an ancillary way had a semver minor change (added a new prop that was defaulted) and that broke tests (but not the app) in our component. Perhaps we're doing it wrong, but adding tests to packages that can trivially be incorrectly broken by your dependencies changing is fairly painful (and makes a mockery of semver).
Snapshots should make you more confident when making changes.
In the case you just described they correctly caught that something changed.
Imagine a minor semver update that works in 99% of cases but has a very tricky edge case the authors didn't consider when bumping to the new version, and you're "super lucky" enough to have that scenario in your codebase, but only on a single page of your app under a certain condition.
Snapshot failures highlight subtle dependencies between seemingly unrelated components, making you aware of them.
I think of snapshot failures as warnings: they're potential breakages, and I can now check that everything works; with good coverage I know exactly what changed and where, so I can go and check case by case.
Once I'm absolutely sure everything is fine, I can safely update the snapshots, which, as you correctly stated, is easy to do: I just have to pass the `-u` flag.
I do think, though, that neither of these should replace tests that test the actual postconditions of your code, and that the preponderance of false negatives and the ease of resetting the canary discourage developers from spending the effort required to properly understand what is going on.
Almost all programming is about erecting firewalls to contain changes, so that as they ripple out from the change site, they hit boundaries beyond which things no longer need to be modified. As part of that, modules document what you can and cannot rely on. If you have a test that breaks because it's relying on some feature of a module's output that is not considered public behaviour, the bug is in your test, not the module that causes the failure.
> In the case you just described they correctly caught that something changed.
It is not the job of tests to tell you that something changed, but that something broke.
Perhaps I'd have less problem if people talked about them as canaries rather than as tests. 'Test' implies that there's something wrong if you fail.
It's not entirely clear to me why you'd want a headless browser though. Once you're already running a browser, you might as well run a full one, and also get the debugging capabilities.