
Pytest 3.2.0 released - happy-go-lucky
https://docs.pytest.org/en/latest/changelog.html#pytest-3-2-0-2017-07-30
======
happy-go-lucky
For those new to Pytest, here's a tutorial to get you started:

Testing Python Applications with Pytest
[https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest](https://semaphoreci.com/community/tutorials/testing-python-applications-with-pytest)

------
giancarlostoro
I'm fairly new to Python and wondering whether there are any reasons to use
pytest as opposed to the standard unittest package that comes with Python?

~~~
mivade
If for no other reason, pytest requires a lot less boilerplate.

Pytest fixtures are also quite a bit more flexible than unittest's setUp and
tearDown methods, although they take a little while to really understand (at
least they did for me) since they use some magic.
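The boilerplate difference shows up even in a tiny test. A minimal sketch (the `make_user` factory and test are invented for illustration): pytest injects the fixture by matching the argument name, with no test class or setUp method required:

```python
import pytest

def make_user():
    # plain factory, usable outside of pytest as well
    return {"name": "alice", "active": True}

@pytest.fixture
def user():
    # pytest calls this automatically for any test that
    # declares an argument named `user`
    return make_user()

def test_user_is_active(user):
    assert user["active"]
```

The unittest equivalent needs a TestCase subclass and a setUp method just to share that one object between tests.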

~~~
bpicolo
Yield fixtures are just awesome for mocking, but the magical global dependency
injection definitely makes you feel a bit weird on occasion
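A sketch of the pattern (the names are invented): everything before the `yield` is setup and everything after it is teardown, so the patch is undone even if the test fails:

```python
import time
from unittest import mock

import pytest

@pytest.fixture
def frozen_time():
    # setup: patch time.time for the duration of the test
    with mock.patch("time.time", return_value=1000.0) as patched:
        yield patched  # the test body runs here
    # teardown: leaving the `with` block restores the real time.time

def test_reads_frozen_time(frozen_time):
    assert time.time() == 1000.0
```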

~~~
quodlibetor
_especially_ in combination with pytest-mock.

------
jwilk
I've been a long-time nosetests user, but it's become unmaintained, so I'd
like to migrate to something else. pytest seems to be an obvious choice, but I
really dislike the way it formats test failures. Is there a way to make it use
a more traditional format?

~~~
jeremya
Yes, you can alter the way tracebacks are printed (or not printed) for test
failures:

[https://docs.pytest.org/en/latest/usage.html#modifying-python-traceback-printing](https://docs.pytest.org/en/latest/usage.html#modifying-python-traceback-printing)

~~~
jwilk
Thanks! `--tb=native` combined with `--color=no` makes it almost bearable.

It still annoys me that

    
    
      ==================================================================================== FAILURES =====================================================================================
    

takes the whole screen width. I can pipe stdout through "cat" to fix it, but
it's not very convenient.

~~~
variedthoughts
Try -q

------
theptip
I've been pretty disappointed with PyCharm's support for Pytest in a Django
project -- seems to be incapable of overriding the `manage.py` test runner
consistently.

pytest-django has some docs[1] for how to plug into `manage.py`, but they are
broken as of Django 1.10.

Anyone else had this problem?

[1]: [https://github.com/pytest-dev/pytest-django/blob/master/docs/faq.rst#how-can-i-use-managepy-test-with-pytest-django](https://github.com/pytest-dev/pytest-django/blob/master/docs/faq.rst#how-can-i-use-managepy-test-with-pytest-django)

~~~
gegenschall
It is actually discouraged by pytest-django[0] to run it using Django's test
command. Just run `pytest` itself.

[0]: [https://pytest-django.readthedocs.io/en/latest/#why-would-i-use-this-instead-of-django-s-manage-py-test-command](https://pytest-django.readthedocs.io/en/latest/#why-would-i-use-this-instead-of-django-s-manage-py-test-command)

~~~
theptip
Yeah, see part one of my comment -- I've found that PyCharm is pretty bad at
selecting `pytest` as the test runner. It works for test functions, but not
for files/directories.

Perhaps I'm missing something obvious, but PyCharm just uses `./manage.py
test` even when I've selected Py.test as the project test runner.

------
fermigier
The `--last-failed` option will change my life! <3

~~~
masklinn
FWIW it was added back in 2.8.0 (though only as the short `--lf` option).
3.2.0 makes it smarter by improving the first-failed/last-failed cache-clearing
behaviour: it now only clears tests which have succeeded.

You can see the use case in the PR: make a change that breaks a lot of tests,
then fix module by module, e.g. after an initial `pytest` (lots of failures)
fix with `pytest core --lf`, then `pytest controller --lf`, … The problem
pre-3.2 is that `pytest core --lf` would reset the cache with only its own
failures, so the subsequent `pytest controller --lf` would start with an empty
cache and run every single test rather than only those which had previously
failed.

This is problematic when you have a large, expensive test suite (e.g. so much
so that you xdist it, as the original PR did).

------
noisy_boy
I really like Pytest, but I didn't see a way to easily customize the output
format (not just the traceback, but the overall output). If they added
documentation with examples of the available hooks for customizing the output,
I'd have nothing further to complain about in Pytest.

~~~
joaodlf
This is a big one for me, too. Would make working with CI tools much easier.

~~~
luhn
One thing that might help is that pytest can output test results as JUnit XML
files, which many CI services support.

~~~
kstrauser
That's how we wire it into Jenkins.

------
theptip
Any pointers on using fixtures with Django database models?

Seems like it would be useful to be able to manage the lifecycle of fixtures
more explicitly than Django's TestCase allows you to, but this could get
gnarly if the test case transactions were rolling back changes to fixture
objects.

~~~
travisjungroth
Are you using [https://pytest-django.readthedocs.io/en/latest/](https://pytest-django.readthedocs.io/en/latest/)?

~~~
theptip
Yup

------
pingpongchef
Pytest's fixtures have ruined me for other testing frameworks.

------
O5vYtytb
Oh boy! `PYTEST_CURRENT_TEST` let me take out a lot of hacks from my code.
We're doing a lot with logging and parallel tests, so this was always a sore
point.
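For anyone wanting to do the same, a sketch (the filter class is invented): pytest 3.2 sets `PYTEST_CURRENT_TEST` to something like `path/to/test_file.py::test_name (call)` while each test runs, so a logging filter can stamp every record with the test that emitted it:

```python
import logging
import os

class CurrentTestFilter(logging.Filter):
    """Attach the currently running pytest test (if any) to each record."""

    def filter(self, record):
        # pytest 3.2+ exposes the running test in this variable;
        # outside of pytest we fall back to "-"
        record.current_test = os.environ.get("PYTEST_CURRENT_TEST", "-")
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(current_test)s] %(levelname)s %(message)s"))
handler.addFilter(CurrentTestFilter())

log = logging.getLogger("demo")
log.addHandler(handler)
```

This keeps interleaved logs from parallel runs attributable without touching application code.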

