The dependency injection for fixtures is somewhat magical, but overall I've found it's the most efficient way to hammer out good tests across the standard unit/integration test spectrum. The default mode of operation doesn't even require importing pytest: just write files whose names end in `_test.py`, functions whose names start with `test`, and bare-bones assertions. Yield-style fixtures (plain `pytest.fixture` with a `yield`; the older `yield_fixture` alias is deprecated in newer pytest versions) work well with the typical mock-and-assert Python test pattern.
There are plenty of plugins out there for more exotic features too: asyncio, parallelization (though test parallelization tends to be much more problematic than just "run tests at the same time"), output to <format>, and so on.
I'll enthusiastically second py.test. It's well documented, well supported, and well thought out. More importantly though, the details are just brilliantly done.
For example: showing the values of local variables in an error traceback always saves me a ton of time. I don't know why this isn't the default option. Alternatively, you can pass --pdb to just run the debugger exactly when a test crashes, so you can inspect the contents.
Another example is how it prints out details on assertions (e.g. "the left was this and the right was this") rather than just "AssertionError". On top of that, for complex data structures, it will diff them and point out exactly which parts of the structure match and which don't, which again saves a ton of time over firing up a debugger and doing it by hand.
py.test avoids XUnit or unittest style class-based tests for this reason (though it will run them if you insist). It opts for simple test functions instead. Fixtures are passed in explicitly as parameters rather than implicitly as class members or whatnot. The result of all this is quite similar to what the author proposes.
pytest also has pytest-bdd[1] for BDD, so you can use pytest for both unit tests and behave-style tests. Having only one runner is just awesome (because you write your fixtures only once).
Also good to mention pytest-xdist[2] for distributed testing...
I've never heard of that term being used but parametrize helps in situations where:
a) You have a slew of test inputs that all need to be tested through the same function, but you don't want to duplicate code.
b) You want something like "test with every combination of [x,y,z] for parameter a and [j,k,l] for parameter b". This is probably what the grandparent is referring to.
c) You have an even more complicated scheme, which you can define yourself.
This is better than just having a for loop over a pre-generated list of parameters and calling a function with the same assertions, because pytest sees and handles them as separate but related test cases (e.g. for error reporting, crash handling, fixtures, etc.).
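A minimal sketch of case (a), assuming a trivial `square` function as the code under test:

```python
import pytest

def square(n):
    return n * n

# Each tuple becomes its own reported test case, e.g.
# test_square[0-0], test_square[3-9], test_square[-2-4]
@pytest.mark.parametrize("n, expected", [
    (0, 0),
    (3, 9),
    (-2, 4),
])
def test_square(n, expected):
    assert square(n) == expected
```

If one input fails, pytest reports that single parametrized case as failing while the rest still pass, which you don't get from a loop inside one test function.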
Yes. But what's striking to me is that one would ever want to test only one input state. If I have to test a function f(x) and see whether its output is as expected, I always want to test many inputs (including "silly" ones like wrong types, nulls, and extremes). Writing one test for each of them is absurd; just give a list of inputs and a list of expected outputs and check them all.
The company I work for uses the standard unittest [0] library from Python. This library helps you test the functions defined in your Python program. For each function, you can set up different cases to test (positive and negative). We also use coverage [1] to measure code coverage and generate a report. We usually output the report as HTML so we can check which parts of the code (the branches of our program logic) are missed.
I used unittest for ages and put off trying pytest for far too long because "unittest ships with Python, and it's good enough". Then I finally gave pytest a solid chance, and now I'm never going back.
I love, LOVE, L-O-V-E explicitly passing fixtures to individual tests, rather than trying to group my tests into "stuff that needs this fixture", "tests that need that setUp instead", and "things that need some of both ClassA's setUp and ClassB's setUp, so let's inherit from both". That was a freaking nightmare that I ran into all the time.
Suppose ClassA sets up a database connection and deletes test data out of it afterward. That's kind of expensive, so you only want to use its fixtures when you actually need them. ClassB does the same but for a remote API. You end up clumping all your DB tests under ClassA and all your API tests under ClassB. Fair enough. But over time ClassA also ends up with tests that don't actually hit the database, but they're so logically associated with the other tests in there that you toss them in anyway. And ClassB accretes tests like "assert that parameters are valid before hitting the API", and you're setUp'ing and tearDown'ing on those anyway even though you don't strictly need to. Fine, whatever - there aren't that many, so it's not terribly bad. But now you want to write some tests that fetch from the DB and make API calls, so now you have to subclass both ClassA and ClassB to get a covering set of fixtures, and invariably they won't play nicely somehow, and everything just gets infinitely more complicated. Ugh.
Compare the above with having a "db" fixture and an "api" fixture. Tests that need a database are defined like "def test_fetch_one_record(db)". Tests that need an API look like "def test_get_remote_data(api)". Need to use both? "def test_move_database_to_service(db, api)". The first time I was able to use that pattern in some existing code, I almost cried tears of happiness.
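A minimal sketch of that pattern, with a hypothetical `db` fixture (the `FakeDatabase` class here is a stand-in for a real, expensive connection):

```python
import pytest

class FakeDatabase:
    """Stand-in for a real connection; assume setup/teardown are expensive."""
    def fetch_one(self):
        return {"id": 1}
    def close(self):
        pass

@pytest.fixture
def db():
    conn = FakeDatabase()
    yield conn      # the test body runs here
    conn.close()    # teardown runs only for tests that requested db

def test_fetch_one_record(db):
    assert db.fetch_one()["id"] == 1
```

Only tests that name `db` (or `api`, etc.) in their signature pay the setup/teardown cost, so there is no inheritance juggling to get a covering set of fixtures.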
Also, I don't ever want to write "self.assertNotEqual(a, b)" instead of "assert a != b" again, especially when pytest gives much more information about the provenance of both a and b.
To me, using unittest instead of pytest is exactly like using urllib instead of Requests. You might legitimately want to sometimes, but those cases are very few and far between. Most of the time you're just making your life more difficult for no gain.
This. Simple, well documented, works for 99% of test cases. It's also how Django does it [0].
True, it has some warts, and you need to learn about e.g. `autospec`. But for better or worse, it's the "standard" way, and for me that low barrier to entry/universality is a bigger value-add than `assert` vs `self.assertEqual`, etc.
Django has somewhat different constraints than most projects, and its test suite has been around for a long, long time. In most situations pytest is a superior choice, for reasons that go beyond 'assert' vs 'self.assert*'.
It's much more Pythonic, for starters; Java-style camelCase is not great to look at.
Fixtures are very powerful, as are parameterized tests. Assertion failures give you a lot of output to help you debug, like local variables and their values.
There are a wide variety of plugins to help you test various frameworks (pytest-django, pytest-asyncio), various other plugins that plug into linters (pytest-flake8, pytest-isort). Super easy parallel tests.
If you need to write something with as few external dependencies as possible (or none), then the standard unittest library is OK.
All in all it's a bit like using urllib over requests. Sure, both work and get the job done, but one is just nicer.
If you want to really make Python feel like a typed language, and still be idiomatically Pythonic, learn to write constructors that raise ValueError in appropriate ways.
Is there a good way to do that that doesn't involve a bunch of if-this-or-that-then-raise ~boilerplate in your constructors? e.g., is there some nice library that will let me put annotations like "must be a list of at least two items" or "must be an integer between 0 and 99" with a more concise syntax?
Ummm, maybe? There seem to be libraries for everything, but I do it manually. The constructors become "non-trivial", but not overly-complicated, and it pays off in simpler, more robust code elsewhere.
So my code would be something like:
    class Foo:
        def __init__(self, x, y=None):
            if y is None:
                # Copy-constructor style: pull x and y off a Foo-ish object
                x, y = x.x, x.y
            self.x = int(x)
            if self.x < 0 or self.x > 99:
                raise ValueError('x must be in range 0..99')
            if len(y) < 2:
                raise ValueError('y must be a sized iterable of length >= 2')
            self.y = y
Soo...
You can say:
f = Foo(1,['a','b'])
f = Foo(1,('a','b', 3))
f = Foo("42", "bar")
f2 = Foo(f)
Calling int(x) will accept anything that implements __int__(), or raise TypeError/ValueError. len(y) will accept anything that implements __len__(), and the check enforces length >= 2, or raises. Calling Foo() on an instance of Foo causes some excess copies, but it does have the effect of making sure you passed in something Foo-ish, where "Foo-ish" means "has an x attribute that can be made into an integer between 0 and 99, and a y attribute that is a sized iterable of length >= 2". The above code lets some exceptions bubble up, but they could be wrapped in try..except to turn them into ValueError with specific messages. Note that this code accepts anything Foo-ish without calling isinstance(). Abstract base classes with isinstance() are a more suitable choice for complex cases of checking protocol conformance.
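A runnable sketch of that try..except wrapping, with a condensed copy of the Foo constructor above and a hypothetical `make_foo` helper:

```python
class Foo:
    # Condensed version of the constructor above, for a self-contained sketch.
    def __init__(self, x, y=None):
        if y is None:
            x, y = x.x, x.y
        self.x = int(x)
        if not 0 <= self.x <= 99:
            raise ValueError('x must be in range 0..99')
        if len(y) < 2:
            raise ValueError('y must be a sized iterable of length >= 2')
        self.y = y

def make_foo(x, y):
    # Turn bubbling TypeError (e.g. len() on an int) into a ValueError
    # with a constructor-specific message.
    try:
        return Foo(x, y)
    except (TypeError, ValueError) as exc:
        raise ValueError(f"invalid Foo arguments: {exc!r}") from exc
```

Callers then only ever have to catch ValueError, regardless of which underlying protocol check failed.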
So whether or not the performance impact and extra code are worth it probably varies from project to project.
Also, the next attrs release looks like it's getting some support for PEP484 type hinting annotations, but I don't know the details beyond having glanced at some pull requests. So it might get simpler in the future. If so, that'd make it easy to trust the type-validation, and just have the range check as a custom validator.
That's a different problem, though admittedly I was not very precise. (And by 'feels like typed', I meant feels closer to something like Haskell or OCaml than having eg Java as my target.)
Even the best constructor magic in Python will struggle helping you verify the types of your functions.
Yes: without a test case to exercise a run-time check, there is no check. Both Haskell and Python have a sane approach to types; the stuff in the middle, not so much.
Pytest is full of magic and can be horrendous for a newcomer to your team. You can't work out what is happening just by reading the tests, you have to know how pytest works.
Also, when something breaks in pytest, all the magic means that it can be very hard to work out what is happening.
Hypothesis is a worthwhile addition to any Python testing framework. As a 'QuickCheck' implementation it even beats the original Haskell version in lots of respects.
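A minimal property-based sketch with Hypothesis, assuming a hypothetical run-length `encode`/`decode` pair as the code under test:

```python
from hypothesis import given, strategies as st

def encode(s):
    # Run-length encode: "aaab" -> [("a", 3), ("b", 1)]
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def decode(pairs):
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_roundtrip(s):
    # Hypothesis generates many strings and shrinks any failure
    # to a minimal counterexample.
    assert decode(encode(s)) == s
```

Instead of hand-picking inputs, you state a property ("decode inverts encode") and let Hypothesis hunt for inputs that break it.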
https://docs.pytest.org/en/latest/