All the problems they're facing are due to the decision to make tests a "special case" rather than just normal Rust code that happens to test things.
While I like that Rust's `#[test]` can be placed anywhere and will be found/run by cargo test, I think there should also be something like `#[test_context]` where you receive a handle to the test framework's entry point (a little like Go) and from there you just write code to create tests... this is a stupidly simple way to solve every problem mentioned in this post. Dart has something like this, and it's amazing to realize that its totally simplistic solution makes everything those complex Java testing frameworks do (test by example cases, skip depending on a function's return value, group tests into sub-tests, etc.) not just possible, but easier and more fun to use, because it's just "normal" code (your IDE can autocomplete your stuff, so you don't need to google every time for the magic combination of annotations/parameters/types to use).
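This is roughly what it might look like in Rust (everything below is hypothetical: the `#[test_context]` attribute, the `TestContext` type, and its `test`/`group` methods are all invented for illustration):

    // Hypothetical sketch: neither #[test_context] nor TestContext exist today.
    #[test_context]
    fn my_tests(ctx: &mut TestContext) {
        // Data-driven cases are just a loop over plain values.
        for (x, y, expected) in [(-2, -4, 8), (2, 4, 8), (4, -2, -8)] {
            ctx.test(format!("{x} * {y} == {expected}"), move || {
                assert_eq!(x * y, expected);
            });
        }

        // Skipping based on a runtime condition is just an `if`.
        if std::env::var("SLOW_TESTS").is_ok() {
            ctx.test("expensive case", || { /* ... */ });
        }

        // Grouping into sub-tests is just a nested call.
        ctx.group("edge cases", |g| {
            g.test("overflow", || assert_eq!(i8::MAX.checked_mul(2), None));
        });
    }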
Please stop using Java annotation-like stuff for this kind of thing; it just limits what you can do in exchange for looking a little prettier (while being vastly more complex when the whole implementation is taken into consideration).
> All problems they're facing are due to the decision to make tests a "special case" rather than just normal Rust code that happens to test things.
That's one way in which this can be interpreted, but I don't really agree. I think the problems are a classic case of a "good enough" solution that shipped very early and became a core part of the ecosystem. Now it's hard to move off of it or to improve it, so it sits in a state of stasis.
Even today you do not need to use `#[test]` or the built-in functionality at all. (The exception being that the internals of the print interception are not exposed, but there are other ways around that).
Fundamentally, the frustration with the current state is not high enough, and nobody is pushing strongly for a much improved testing solution. But technology-wise, you would not need anything from the language to build a better test system in Rust.
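For example, Cargo already lets a test target opt out of the default harness entirely, at which point the test binary is just a normal program whose exit code decides pass/fail. A minimal sketch (the file name and cases are made up):

    // tests/custom.rs
    // In Cargo.toml, disable libtest for this target:
    //
    //   [[test]]
    //   name = "custom"
    //   harness = false
    //
    // `cargo test` then compiles this file as a plain binary and runs main();
    // a non-zero exit code marks the test target as failed.
    fn main() {
        let cases = [(2, 4), (-2, -4)];
        for (x, y) in cases {
            assert!(x * y > 0, "case ({x}, {y}) failed");
            println!("case ({x}, {y}) ... ok");
        }
        println!("all {} cases passed", cases.len());
    }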
> the problems are a classic case of a "good enough" solution that shipped very early and became core of the ecosystem
BTW, that's the exact reason I like the lack of a “good standard library” in JS that people usually cite as a downside. When there's a good-enough default solution, it suffocates the evolution of different ideas in the field.
Do you want the dynamically generated tests reported during test discovery? If so, then you need to assume that if you pass in the reference, it will only be used for dynamically generating tests, and not for helping with skipping tests or other use cases.
Thanks, these are helpful recommendations. I'm currently working on a Gameboy emulator and making tests with many permutations is very verbose with `#[test_case]`. I'll have to check these out.
> Data driven tests are an easy way to cover a lot of cases (granted, property testing is even better). The most trivial way of doing this is just looping over your cases
> […]
> You don't know which input was being processed on failure (without extra steps)
> Any debug output from prior iterations will flood the display when analyzing a failure
For generating separate tests for different inputs, while keeping it easy to see which input failed, I use the test-case crate:
#[cfg(test)]
mod tests {
    use test_case::test_case;

    #[test_case(-2, -4 ; "when both operands are negative")]
    #[test_case(2, 4 ; "when both operands are positive")]
    #[test_case(4, 2 ; "when operands are swapped")]
    fn multiplication_tests(x: i8, y: i8) {
        let actual = (x * y).abs();
        assert_eq!(8, actual)
    }
}
And then when you run `cargo test`, you get this output:
running 3 tests
test tests::multiplication_tests::when_both_operands_are_negative ... ok
test tests::multiplication_tests::when_both_operands_are_positive ... ok
test tests::multiplication_tests::when_operands_are_swapped ... ok
test result: ok. 3 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s
I do like C#'s test casing better, where you take n+1 params: the function's n params, plus the expected return value. Very clean. The test-case crate can express this too, with its `=>` result syntax:
#[test_case(-2, -4 => 8 ; "when both operands are negative")]
#[test_case(2, 4 => 8 ; "when both operands are positive")]
#[test_case(4, -2 => -8 ; "when one operand is negative")]
fn multiplication_tests(x: i8, y: i8) -> i8 {
    x * y
}
I think I bring up rstest at some point, which supports something similar. The downside is that it's static; for example, I can't discover tests from the filesystem like trybuild does.
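For reference, rstest's version of the example above looks roughly like this (using its `#[case]` attributes, with the same made-up multiplication cases):

    use rstest::rstest;

    #[rstest]
    #[case(-2, -4, 8)]
    #[case(2, 4, 8)]
    #[case(4, -2, -8)]
    fn multiplication(#[case] x: i8, #[case] y: i8, #[case] expected: i8) {
        assert_eq!(x * y, expected);
    }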
This is pretty good, but since it's static, you can't use parameter-generating methods (e.g. for combinatorial input or derived tests). And if you do that within the test, you're back to having to manage those cases on your own. JUnit & AssertJ really do have advantages over Rust here.
I have to agree, every time I'm writing unit tests in Rust I'm struck by how extremely limited it is compared to testing frameworks in other languages, and how much effort is needed to get some fairly standard stuff like data tests to work. It's definitely something that needs attention.
Is there anything like Python's unittest.mock for Rust? All of my Rust projects are woefully lacking in unit tests because I just find it hard to write tests for stuff that interacts with other stuff. Like in Python, if I want to test that an API integration works, I can just mock out requests, urllib, whatever, and test it that way. But as far as I can tell, I don't really have the tools for that in Rust.
I don't want to tell others how to write code, but if you need to provide a REST API mock to test more than just the thing that does requests, your code could likely be improved a lot by decoupling things.
Let's say you are making a feature which will call into an external API, send an email based on the response and then also write something to the DB.
How do you ensure this actually works, and that, for example, emailing isn't accidentally broken by a future refactor?
Of course you can and should test each individual component in this scenario, but at some point you need to mock more than "the thing that does requests" and check that everything clicks together with the other components.
I would argue that when you are at the point of testing that the actual API works as you expect, what you need is an integration test, not a unit test. And you shouldn't mock it, because if you mock it you won't actually be testing what you wanted to test. For unit tests, use a Fake for anything that interacts with an external system.
And that's nice because now we have that abstraction layer away from IO. This will feel like overkill in some cases for sure, but I do find it great for a lot of cases. You can even just make that TwilioClientFake inside the testing area itself directly. Go has done a good job pushing the idea of accepting interfaces everywhere, but its ad hoc construction of those is a bit leaner.
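Something like this, roughly (the `TwilioClient` trait and `TwilioClientFake` names are just for illustration):

    // Production code depends on this trait, not on a concrete HTTP client.
    trait TwilioClient {
        fn send_sms(&self, to: &str, body: &str) -> Result<(), String>;
    }

    fn notify_user(client: &dyn TwilioClient, phone: &str) -> Result<(), String> {
        client.send_sms(phone, "your order shipped")
    }

    #[cfg(test)]
    mod tests {
        use super::*;
        use std::cell::RefCell;

        // The fake records calls instead of doing IO.
        struct TwilioClientFake {
            sent: RefCell<Vec<(String, String)>>,
        }

        impl TwilioClient for TwilioClientFake {
            fn send_sms(&self, to: &str, body: &str) -> Result<(), String> {
                self.sent.borrow_mut().push((to.into(), body.into()));
                Ok(())
            }
        }

        #[test]
        fn sends_one_sms() {
            let fake = TwilioClientFake { sent: RefCell::new(vec![]) };
            notify_user(&fake, "+15551234").unwrap();
            assert_eq!(fake.sent.borrow().len(), 1);
        }
    }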
Sorry for the StackOverflow-esque non-answer though. Also excuse my Rust, it's a bit rusty.
I use the mockall crate myself, which works rather well. It does however mean that you generally need to write your code against traits, and pass things around as traits.
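Roughly like this, assuming a trait of your own (`WeatherApi` and `is_freezing` are made-up names; mockall's `#[automock]` generates the `MockWeatherApi` type):

    use mockall::automock;

    #[automock]
    trait WeatherApi {
        fn temperature(&self, city: &str) -> f64;
    }

    // The code under test accepts the trait, so tests can hand it the mock.
    fn is_freezing(api: &dyn WeatherApi, city: &str) -> bool {
        api.temperature(city) < 0.0
    }

    #[test]
    fn freezing_in_oslo() {
        let mut mock = MockWeatherApi::new();
        mock.expect_temperature().returning(|_| -5.0);
        assert!(is_freezing(&mock, "Oslo"));
    }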
Your specific task sounds like maybe it calls for Wiremock, since it is presumably a third party HTTP API and you'd like to pretend this service has some defined behaviour while testing your integration does what you expect.
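Roughly like this (the endpoint and response are made up; `MockServer`, `Mock::given` and `ResponseTemplate` are wiremock's actual API):

    use wiremock::{Mock, MockServer, ResponseTemplate};
    use wiremock::matchers::{method, path};

    #[tokio::test]
    async fn third_party_returns_ok() {
        // Spins up a real HTTP server on a random local port.
        let server = MockServer::start().await;

        Mock::given(method("GET"))
            .and(path("/v1/status"))
            .respond_with(ResponseTemplate::new(200).set_body_string("ok"))
            .mount(&server)
            .await;

        // Point the code under test at server.uri() instead of the real API.
        let body = reqwest::get(format!("{}/v1/status", server.uri()))
            .await.unwrap()
            .text().await.unwrap();
        assert_eq!(body, "ok");
    }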
I found wiremock to be excellent. What I really struggled with was mocking out other bits of Rust code, or setting up the DB with the right data and wiping it after the test. All this stuff that just comes for free in other languages' test setups you have to manually reimplement.