Jest – Painless JavaScript Testing (facebook.github.io)
202 points by huan9huan on Dec 8, 2016 | 61 comments



I'm a big fan of Jest - if you're using Jest with Visual Studio Code, I'd recommend looking at my extension that gives you a more IDE-like experience.

https://github.com/orta/vscode-jest


Thank you for this! I've been patiently waiting for someone more motivated and intelligent than myself to create a free inline test runner. You are my hero. Keep the project open source, free, and plop a donations button on the page and I'll be quick to open my wallet.


Tried it for the first time yesterday. It is pretty amazing and the productivity boost is crazy (it's one of those tools that changes the balance when you ask if testing is worth the time or not).


That's pretty darned rad!

Kinda like wallabyjs, but not hitting me over the head with a license :P


Jest used to have a lot of problems and was poorly maintained.

Over the past few months, that has changed entirely. Improved speed, lots of bug fixes, snapshot tests and general ease of setup with React projects have all really helped.

Jest is now the test framework I would suggest for anyone starting a new React project.


Agreed. We started using Jest in April 2015 or so and I hated it with a passion for well over a year. I can't remember exactly which version it was, but it improved by miles recently. Now it's my go-to JS test library without a doubt.


Seems like it was a side project of some FB employees and it didn't get traction till a few months ago. Till then the project pretty much stalled.


Has the documentation improved? Last time I checked it (about a year or so ago) it was terrible.


It's but a click away (and yes it has :-)) http://facebook.github.io/jest/docs/getting-started.html


We've improved it some but there's more to go. If you have specific parts that you consider to be still-terrible let us know!
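
For reference, a minimal test in the spirit of that page looks roughly like this (a sketch; sum.js here is just a toy module):

    // sum.js
    module.exports = (a, b) => a + b;

    // sum.test.js -- picked up by Jest's default test file pattern
    const sum = require('./sum');

    test('adds two numbers', () => {
      expect(sum(1, 2)).toBe(3);
    });

Running `jest` (or wiring it up as the package.json "test" script) finds and runs it with no extra configuration.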


I have a strange history with JS test libraries. Of the four that I've used, I've fixed bugs in three of them within a week of first picking them up: QUnit, Sinon, and Jest.

This isn't commentary on the quality of Jest. After turning off auto-mocking and instead only using the mocking when I really needed it, I ended up using it for a moderately successful side project and enjoyed the experience.
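
Roughly what that looks like in practice (a sketch; the module names here are made up): with automocking off, nothing is mocked unless you opt in per test file.

    // user.test.js
    jest.mock('./api');                      // explicitly replace this one module with an auto-mock

    const api = require('./api');
    const { loadUser } = require('./user');  // hypothetical code under test, which calls api.fetchUser

    test('loadUser hits the api', () => {
      api.fetchUser.mockReturnValue({ name: 'Ada' });
      loadUser(1);
      expect(api.fetchUser).toBeCalledWith(1);
    });

Everything not named in a jest.mock() call stays real.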


I switched to tape (the "Test anything protocol" from 1987) and never looked back: https://medium.com/javascript-scene/why-i-use-tape-instead-o...

I use faucet for pretty-printing the output.
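
A test is just a module you run with node, and faucet consumes the TAP output on stdin; a small sketch:

    const test = require('tape');

    test('array length', (t) => {
      t.plan(1);                // declare how many assertions to expect
      t.equal([1, 2, 3].length, 3);
    });

    test('truthiness', (t) => {
      t.ok(true, 'true is truthy');
      t.end();                  // or end explicitly instead of planning
    });

Run it with something like `node test.js | faucet`.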


For people who want to use tape and have TAP support and are using a Jetbrains IDE, vote for TAP support on this ticket:

"Add support for tests that output Test Anything Protocol (TAP)" https://youtrack.jetbrains.com/issue/WEB-20916


I hear you. I've set up testing for Angular apps, and whilst I don't mind the unit testing with karma/jasmine, when I've used Protractor I've found it unbearable; some functions are exposed as promises, some are not, some are resolved in the 'it' statement, some are not and you have to chain a 'then', and it's anyone's guess as to how a particular function behaves.


After discovering TAP a while back I switched a bunch of tests to output in that format, but found that it can't actually be consumed by much, other than pretty-printers (too trivial to bother with) or smolder (a web server; overkill).

I think there's definitely space for some local commands which graph, analyse, etc. a bunch of TAP, but I haven't found anything like that :(


I never had the need to process the output, I just liked the simple structure of my tests and the plan/count/end system.


Yeah, tape is nice. I don't think Sinon is really the same kind of thing; at least when I used it, it was for things like mocking XHR responses.


Automocking is off by default now, and a lot of work has been done to improve the developer experience in recent times.


Thanks for the work you & the team have put into Jest. While I haven't used it recently, I'm glad that it's out there as an option. The _ability_ to mock easily was always super great.


Have you used jasmine + karma? That's been, and still is, my go-to for solid and fast testing.


It's solid once you get everything set up, but that takes me a while. Of course I've only set it up for a couple of projects, using different build systems, and having tried other test libraries in between in order to minimize the chances of my remembering anything...

I think there's room for one or two test frameworks that Just Work. Surely you'll lose some configurability, but mostly you don't need it.


I have used Jasmine+Karma. It's good stuff. It doesn't have the "awesome stuff baked in" feel that Jest gives, but it also doesn't give you a headache when something baked in doesn't work as expected :).


There's no such thing as painless testing.


But it hurts so good.

As opposed to the alternative, which is dealing with technical debt, spaghetti, legacy code, etc...


Testing and technical debt/spaghetti code/legacy code are not mutually exclusive.


You're right. But testing bad code is really, really hard, so tested code is usually somewhat better overall, just because that's the easiest way to do it.


If it ain't hurting, you ain't doing it right.


If you develop the tests along with the main code, it's usually less painful, except if there's a hunk of legacy code to integrate your new one with. Ok, this is probably the case for most developers...


Maintaining disorganised code hurts a lot more than writing tests.


I've been working on a React Native project and I really enjoy how easy jest is to use. It's hard to tell from this post though, is there something new/specific about it? Or are we all just agreeing that it's awesome?


I just migrated a project[1] to Jest. Now I totally think that Jest is AWESOME.

Before, I was wrestling with Grunt and PhantomJS, but because the PhantomJS version is ages behind I couldn't really test ES6, so I had to do a hybrid: running mocha in an actual browser and the rest of the dev stack in Grunt. Now I am able to do it all automatically. Not only that, but Jest includes a browser-like environment by default which supports ES6, plus an assertion library, so just with 'jest' I am doing the same as I did before with mocha, chai, and PhantomJS (+ the pain of installing PhantomJS separately).

I am not so much into React, but I just fell in love with Jest. Testing will be something totally different from now on, thank you Facebook.

[1] http://github.com/franciscop/superdom.js

PS: it was a bit more difficult to integrate Jest into Grunt, and I get the output without colors, but I'm sure I'll find a solution soon-ish.


Jest sounds good on paper; it's great it exists, it definitely has potential, and it has even improved a lot recently (I usually try every new version that comes out). However, I have two pain points that prevent me from switching to Jest for now:

- it uses regexes instead of globs, so you can't just give it the list of tests like you would with Mocha or electron-mocha (e.g. something like "jest src/**/*.tests.js"; see the config sketch below)

- it excludes paths that include "node_modules" and it's not just a default (which would be fine), it's hardcoded, so you can forget about local modules, or dependencies that use local modules

Fortunately, there are already open issues for both, so that might improve in the future :)
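
To make the first point concrete (a sketch; the exact patterns depend on your layout), Jest takes a regex in its config, e.g. in the "jest" section of package.json:

    {
      "jest": {
        "testRegex": "src/.*\\.tests\\.js$"
      }
    }

whereas with mocha you just pass globs on the command line:

    mocha "src/**/*.tests.js"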



Hey there, frequent Jest contributor here! Globs are under consideration (there's an issue from a collaborator on the subject.)

FWIW, Jest is insanely good. I've been using it for almost 2 years now through some of the rougher patches in its development and man, it's an extremely well thought-out system with a team of highly motivated maintainers. Facebook uses it for almost all their JS projects, so it gets a LOT of attention and TLC.

Many things that were poor defaults have been changed since v17, like getting rid of auto mocking. It's worth a look again.


I work on Jest at FB.

* I agree regexes suck, but that's all we had five years ago when it was started. I would love to move to a glob system incrementally, but most of that will have to come from the community. I'm also still concerned about performance: we match things against tens of thousands of files a lot, and regex seems strictly faster to me, but I'm happy to be proven wrong or shown that it won't be relevant.

* There is one minor issue with create-react-app's recommended use of node_modules to split things up. I would recommend lerna for multi-package development ( https://github.com/lerna/lerna ), and we are hoping to put whatever fix will work well in place here. There are workarounds, but they aren't great.


node-glob was released in 2010


Thanks! That's great. Maybe you can help us upgrade then? Jest is fun to contribute to.


This might be a bit frivolous.

But

Jest has gotten a whole lot better over the last few months. It used to be slow, cryptic and dogmatic. Now it's fast, transparent and open to debate.

Great job Jest team!

I would qualify that but I started this comment saying it was going to be frivolous. If enough people care however I'll expand on this.


I have used jest and I have used mocha. I use mocha because I got sick of having to "unmock" everything in jest tests. People have complained that jest is slow; that was never a problem I encountered.

The unmock becomes very annoying after a while.


When jest was first released, it was incredibly, glacially slow, buggy, poorly documented, and seemed to be largely unmaintained. Critical features were broken and stayed broken for months. I was an early adopter and I got burned hard, having to rewrite everything in Mocha.

Since then, apparently, it has changed hugely, and is now fast, reliable, and pleasant to use. Or so I've heard; I have no particular reason to change back to jest, since mocha is fine.

But if you're wondering why you've heard about it being slow, it's because for many, many months jest was absurdly, unbelievably slow; simple "hello world" tests would take seconds, and even medium-sized projects would take minutes. And there was no watch mode, nor any ability to re-run failing tests, or re-run tests on changed files. It was absurd.

Edit: See, e.g., https://github.com/facebook/jest/issues/116 opened 8 Aug 2014, and finally closed 17 Feb 2016. Quite a run.

Also, there were tons of bugs with the mocking and, especially, the `dontMock()` methods; many things could cause attempts to turn mocking off to silently, invisibly fail (or even more fun, an attempt to disable mocking on one item could cause it to be silently, invisibly disabled on all items). It's amazingly hard to track down a failing test when a bug triggered by code in another test can cause items that should be mocked to not be mocked, and vice versa.


Jest has changed a lot lately. Back in September they changed the mocking to be opt-in. [1]

The speed is also very much improved!

[1] http://facebook.github.io/jest/blog/2016/09/01/jest-15.html#...


I always used the option to turn it off by default. What you're left with (instead of having to unmock loads of stuff) is a parallel test runner and a utility that is capable of mocking almost anything on command.

They recently changed mocking to be opt-in.


Two questions:

1. What good books (or repos) that use JS can you recommend for writing good tests?

2. What advantages does Jest have over Jasmine?


RE: 2. Jest bundles Jasmine. It's primarily a test runner (like karma[1]) more than a framework. Jest includes mocking capabilities at the function level (like sinon[2]) and module level (like rewire[3]). It supports parallel execution, custom source code transpiling, file extension mocks (for webpack requires) and file watching. It favors large ES6 projects -- takes a second to start up.

[1]: https://github.com/karma-runner/karma
[2]: http://sinonjs.org/
[3]: https://github.com/jhnns/rewire
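
The "file extension mocks (for webpack requires)" part is done via moduleNameMapper in the config: style and image requires get pointed at stub modules. A rough sketch (the stub paths are made up); in package.json:

    {
      "jest": {
        "moduleNameMapper": {
          "\\.(css|less)$": "<rootDir>/test/styleStub.js",
          "\\.(png|jpg|gif|svg)$": "<rootDir>/test/fileStub.js"
        }
      }
    }

where test/styleStub.js is just `module.exports = {};`.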


We usually use a mocha/chai/sinon combo for our testing. Can anyone who has used Jest make a comparison with that?

Looks like jest tries to cover all three.


I was using that too, and switched over to Jest + Chai Assert. It was too difficult for me to replicate features like file watching, running only changed tests for a faster feedback loop, transpiling TypeScript and ES6 in the same project, asset mocking, starting and stopping timers, etc. I only really used sinon.spy(), so I traded it for jest.fn().
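
That trade is close to one-for-one; a rough sketch:

    // sinon
    const sinon = require('sinon');
    const spy = sinon.spy();
    spy('hello');
    sinon.assert.calledWith(spy, 'hello');

    // jest
    const fn = jest.fn();
    fn('hello');
    expect(fn).toBeCalledWith('hello');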


For us the latest versions are better than that combo. The snapshot feature is pretty neat.


Are you forced to use expect().to..., or can you plug in any assert library? (I get the benefits of expect, but I still prefer the assert style.)


You can use any assertion library and it will just work. At one point I was using Jest with Chai when migrating a codebase that happened to use Chai for assertions.
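
For example (a sketch; the object under test is made up): Jest runs the file and provides the globals, and any assertion library that throws on failure works alongside or instead of expect.

    const { assert } = require('chai');

    test('user has a name', () => {
      const user = { name: 'Ada' };     // made-up object under test
      assert.equal(user.name, 'Ada');
      assert.property(user, 'name');
    });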


Jest has taken great steps forward in the past year and I see the momentum only going up. Kudos to the whole team!


Snapshot testing overconstrains your tests, and encourages people who see tests breaking to just regenerate the snapshot without thinking too much.

The painlessness comes with a cost.


Hi! I work on JavaScript Tools at FB.

There is a fantastic post by Ben McCormick about the up and downsides of this system: http://benmccormick.org/2016/09/19/testing-with-jest-snapsho...

Snapshots aren't the only feature of Jest; it is simply one assertion in our assertion library. There is a ton of other stuff in the framework that makes setting up a test environment and writing tests easier.
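
For anyone who hasn't seen one, a typical snapshot assertion looks roughly like this (a sketch assuming Babel/JSX is set up; Link is a made-up component):

    import renderer from 'react-test-renderer';
    import Link from '../Link';

    test('Link renders correctly', () => {
      const tree = renderer.create(<Link page="http://example.com">example</Link>).toJSON();
      // The first run writes a .snap file; later runs diff the new output against it.
      expect(tree).toMatchSnapshot();
    });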

My philosophy for building a test framework is to build a good feature set to help you out in any situation but the user should be in total control. This allows you to find the best way to test your code which I've found to be extremely subjective. The choice of test framework and test methodology seems almost religious at times and I've deliberately tried to stay away from these conversations.


Indeed, I'm not criticising Jest which most of our developers really like, merely criticising snapshot testing, which is one of the ways we are using jest.

The Ben McCormick post does cover some of the issues with snapshot testing, particularly around lack of communication of developer intent.

If you wrote a unit test that asserted that a tree of data looked exactly a particular way, when the only correctness/incorrectness criterion was whether or not a particular prop was present on one of the branches, then you've written a bad test.

Using a snapshot means that your test fails when your code changes in ways that are still correct. It's a test that invites lots of false negatives, and the reason that is considered acceptable is because it makes it easy to update the test when it predictably gives you the false negative.

Normally writing tests that depend on the exact implementation details of the code under test is considered a bad thing. I can't help but think that there are better solutions to this problem.

Almost immediately after one of our teams started using snapshot testing, we had test breakages because an entirely different component that we used in an ancillary way had a semver minor change (added a new prop that was defaulted) and that broke tests (but not the app) in our component. Perhaps we're doing it wrong, but adding tests to packages that can trivially be incorrectly broken by your dependencies changing is fairly painful (and makes a mockery of semver).


I implemented the snapshot functionality in Jest and I work on Jest at FB.

> Almost immediately after one of our teams started using snapshot testing, we had test breakages because an entirely different component that we used in an ancillary way had a semver minor change (added a new prop that was defaulted) and that broke tests (but not the app) in our component. Perhaps we're doing it wrong, but adding tests to packages that can trivially be incorrectly broken by your dependencies changing is fairly painful (and makes a mockery of semver).

Snapshots should make you more confident when making changes.

In the case you just described they correctly caught that something changed.

Imagine a minor semver update that works in 99% of the cases but has a very tricky edge case scenario they didn't consider when bumping to the new version, and you're super lucky and you have that scenario in your codebase, but only in a single page of your app under a certain condition.

Snapshot failures highlight subtle dependencies between seemingly unrelated components, making you aware of them. I think of snapshot failures as warnings: those are potential breakages, and I can now check that everything works. With good coverage I know exactly what changed and where, so I can go and check case by case. Once I'm absolutely sure everything is fine I can safely update the snapshots, which, as you correctly stated, is easy to do: I just have to pass a `-u` flag.


I realise that it can occasionally be useful to have a canary that says 'something has changed' as long as it's easy to reset. Personally, I'd rather such a canary operated on the visual representation plus interactions, since that is what the user experiences, rather than error on tree differences that produce no effective difference to the end user.

I do think though that neither of these should replace tests that test the actual postconditions of your code, and that the preponderance of false negatives and ease of resetting the canary discourages developers from spending the effort required to properly understand what is going on.

Almost all programming is about erecting firewalls to contain changes, so that as they ripple out from the change site, they hit boundaries beyond which things no longer need to be modified. As part of that, modules document what you can and cannot rely on. If you have a test that breaks because it's relying on some feature of a modules output that is not considered public behaviour, the bug is in your test, not the module that causes the failure.

> In the case you just described they correctly caught that something changed.

It is not the job of tests to tell you that something changed, but that something broke.

Perhaps I'd have less problem if people talked about them as canaries rather than as tests. 'Test' implies that there's something wrong if you fail.


This doesn't do browser testing (like Karma), it's just an alternative to mocha, right?


It comes with jsdom baked into it for browser behaviour. Using a full headless browser to test with is not ideal; however, I trust that you have a good reason to want that. If that is a requirement for you, you can tie jest up to karma :), but I beseech you to have a very good reason to test against a headless browser first :)


Depends on what you want to test, I guess. If you want to test UI widget behaviour (not just business logic), then jsdom probably isn't the right tool and you need a browser. E.g. if you need CSS measurements or more tricky DOM behaviour.

It's not entirely clear to me why you'd want a headless browser though. Once you're already running a browser, you might as well run a full one, and also get the debugging capabilities.


Not out of the box, but the environment setup is easily configurable. I've been using Protractor for my E2E tests since it also uses Jasmine... does the trick until there is more first-party support.


Still sticking to Jasmine. Simple and efficient and no need for nodejs to run tests.


Do we really need MORE testing frameworks? The devs should join Jasmine instead.


I'm surprised this does not use Yarn.



