API Reference

This page contains the full reference to pytest’s API.

Functions

pytest.approx

approx(expected, rel=None, abs=None, nan_ok=False)[source]

Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.

Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:

>>> 0.1 + 0.2 == 0.3
False

This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:

>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True

However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.

The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:

>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True

The same syntax also works for sequences of numbers:

>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True

Dictionary values:

>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True

numpy arrays:

>>> import numpy as np                                                          
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) 
True

And for a numpy array against a scalar:

>>> import numpy as np                                         
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) 
True

By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal. Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)
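These special cases can be exercised directly; the following sketch relies only on the documented defaults:

```python
import math

from pytest import approx

# Infinity is only considered equal to itself, regardless of tolerance.
assert math.inf == approx(math.inf)
assert not (1e300 == approx(math.inf))

# NaN is unequal to everything by default; nan_ok=True lets it match itself.
assert not (math.nan == approx(math.nan))
assert math.nan == approx(math.nan, nan_ok=True)

# The default absolute tolerance of 1e-12 handles comparisons against 0.0.
assert 1e-13 == approx(0.0)
```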

Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:

>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True

If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:

>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True

If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:

  • math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the relative tolerance is met w.r.t. either a or b or if the absolute tolerance is met. Because the relative tolerance is calculated w.r.t. both a and b, this test is symmetric (i.e. neither a nor b is a “reference value”). You have to specify an absolute tolerance if you want to compare to 0.0 because there is no tolerance by default. Only available in python>=3.5. More information…

  • numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if the difference between a and b is less than the sum of the relative tolerance w.r.t. b and the absolute tolerance. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. Support for comparing sequences is provided by numpy.allclose. More information…

  • unittest.TestCase.assertAlmostEqual(a, b): True if a and b are within an absolute tolerance of 1e-7. No relative tolerance is considered and the absolute tolerance cannot be changed, so this function is not appropriate for very large or very small numbers. Also, it’s only available in subclasses of unittest.TestCase and it’s ugly because it doesn’t follow PEP8. More information…

  • a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met w.r.t. b or if the absolute tolerance is met. Because the relative tolerance is only calculated w.r.t. b, this test is asymmetric and you can think of b as the reference value. In the special case that you explicitly specify an absolute tolerance but not a relative tolerance, only the absolute tolerance is considered.

Warning

Changed in version 3.2.

In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:

assert approx(0.1) > 0.1 + 1e-10  # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1)  # calls approx(0.1).__lt__(0.1 + 1e-10)

In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior. More information…

pytest.fail

Tutorial: Skip and xfail: dealing with tests that cannot succeed

fail(msg: str = '', pytrace: bool = True) NoReturn[source]

Explicitly fail an executing test with the given message.

Parameters
  • msg (str) – the message to show the user as reason for the failure.

  • pytrace (bool) – if false the msg represents the full failure information and no python traceback will be reported.

pytest.skip

skip(msg[, allow_module_level=False])[source]

Skip an executing test with the given message.

This function should be called only during testing (setup, call or teardown) or during collection by using the allow_module_level flag. This function can be called in doctests as well.

Parameters

allow_module_level (bool) – allows this function to be called at module level, skipping the rest of the module. Defaults to False.

Note

It is better to use the pytest.mark.skipif marker when possible to declare a test to be skipped under certain conditions like mismatching platforms or dependencies. Similarly, use the # doctest: +SKIP directive (see doctest.SKIP) to skip a doctest statically.
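A minimal sketch of an imperative skip (the platform check and test name are illustrative); the commented lines show the module-level form guarded by allow_module_level:

```python
import posixpath
import sys

import pytest

def test_posix_only_behavior():
    if sys.platform.startswith("win"):
        # Skips the rest of this test only.
        pytest.skip("requires a POSIX platform")
    assert posixpath.sep == "/"

# Skipping at module level requires the explicit flag:
# if sys.platform.startswith("win"):
#     pytest.skip("POSIX-only module", allow_module_level=True)
```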

pytest.importorskip

importorskip(modname: str, minversion: Optional[str] = None, reason: Optional[str] = None) Any[source]

Import and return the requested module modname, or skip the current test if the module cannot be imported.

Parameters
  • modname (str) – the name of the module to import

  • minversion (str) – if given, the imported module’s __version__ attribute must be at least this minimal version, otherwise the test is still skipped.

  • reason (str) – if given, this reason is shown as the message when the module cannot be imported.

Returns

The imported module. This should be assigned to its canonical name.

Example:

docutils = pytest.importorskip("docutils")

pytest.xfail

xfail(reason: str = '') NoReturn[source]

Imperatively xfail an executing test or setup function with the given reason.

This function should be called only during testing (setup, call or teardown).

Note

It is better to use the pytest.mark.xfail marker when possible to declare a test to be xfailed under certain conditions like known bugs or missing features.
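A sketch of the imperative form (the condition and reason are hypothetical):

```python
import pytest

def test_legacy_parser():
    legacy_mode = True  # hypothetical runtime condition
    if legacy_mode:
        # Marks this test as expected-to-fail and stops executing it.
        pytest.xfail("parser is known to fail in legacy mode")
    # Only runs when the xfail branch is not taken.
    assert 2 + 2 == 4
```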

pytest.exit

exit(msg: str, returncode: Optional[int] = None) NoReturn[source]

Exit testing process.

Parameters
  • msg (str) – message to display upon exit.

  • returncode (int) – return code to be used when exiting pytest.

pytest.main

main(args=None, plugins=None) Union[int, _pytest.config.ExitCode][source]

Perform an in-process test run and return the exit code.

Parameters
  • args – list of command line arguments.

  • plugins – list of plugin objects to be auto-registered during initialization.
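As a sketch, pytest.main drives an in-process run; the path and options below are illustrative, and the return value is the exit code (0 means all collected tests passed):

```python
import pytest

# Run quietly against a hypothetical test directory. A missing path does
# not raise; it yields a usage-error exit code instead.
exit_code = pytest.main(["-q", "tests/unit"])
print("pytest exited with", int(exit_code))
```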

pytest.param

param(*values[, id][, marks])[source]

Specify a parameter in pytest.mark.parametrize calls or parametrized fixtures.

@pytest.mark.parametrize(
    "test_input,expected",
    [("3+5", 8), pytest.param("6*9", 42, marks=pytest.mark.xfail),],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected
Parameters
  • values – variable args of the values of the parameter set, in order.

  • marks – a single mark or a list of marks to be applied to this parameter set.

  • id (str) – the id to attribute to this parameter set.

pytest.raises

Tutorial: Assertions about expected exceptions.

with raises(expected_exception: Exception[, *, match]) as excinfo[source]

Assert that a code block/function call raises expected_exception or raise a failure exception otherwise.

Parameters

match

if specified, a string containing a regular expression, or a regular expression object, that is tested against the string representation of the exception using re.search. To match a literal string that may contain special characters, the pattern can first be escaped with re.escape.

(This is only used when pytest.raises is used as a context manager, and passed through to the function otherwise. When using pytest.raises as a function, you can use: pytest.raises(Exc, func, match="passed on").match("my pattern").)

Use pytest.raises as a context manager, which will capture the exception of the given type:

>>> with raises(ZeroDivisionError):
...    1/0

If the code block does not raise the expected exception (ZeroDivisionError in the example above), or raises no exception at all, the check will fail instead.

You can also use the keyword argument match to assert that the exception matches a text or regex:

>>> with raises(ValueError, match='must be 0 or None'):
...     raise ValueError("value must be 0 or None")

>>> with raises(ValueError, match=r'must be \d+$'):
...     raise ValueError("value must be 42")

The context manager produces an ExceptionInfo object which can be used to inspect the details of the captured exception:

>>> with raises(ValueError) as exc_info:
...     raise ValueError("value must be 42")
>>> assert exc_info.type is ValueError
>>> assert exc_info.value.args[0] == "value must be 42"

Note

When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:

>>> value = 15
>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...     assert exc_info.type is ValueError  # this will not execute

Instead, the following approach must be taken (note the difference in scope):

>>> with raises(ValueError) as exc_info:
...     if value > 10:
...         raise ValueError("value must be <= 10")
...
>>> assert exc_info.type is ValueError

Using with pytest.mark.parametrize

When using pytest.mark.parametrize it is possible to parametrize tests such that some runs raise an exception and others do not.

See Parametrizing conditional raising for an example.

Legacy form

It is possible to specify a callable by passing a to-be-called lambda:

>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>

or you can specify an arbitrary callable with arguments:

>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>

The form above is fully supported but discouraged for new code because the context manager form is regarded as more readable and less error-prone.

Note

Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.

Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. More detailed information can be found in the official Python documentation for the try statement.

pytest.deprecated_call

Tutorial: Ensuring code triggers a deprecation warning.

with deprecated_call()[source]

context manager that can be used to ensure a block of code triggers a DeprecationWarning or PendingDeprecationWarning:

>>> import warnings
>>> def api_call_v2():
...     warnings.warn('use v3 of this api', DeprecationWarning)
...     return 200

>>> with deprecated_call():
...    assert api_call_v2() == 200

deprecated_call can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warning types above.
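A sketch of the function form; legacy_sum is a hypothetical deprecated helper:

```python
import warnings

import pytest

def legacy_sum(a, b):
    warnings.warn("legacy_sum is deprecated, use sum()", DeprecationWarning)
    return a + b

# deprecated_call invokes legacy_sum(1, 2), asserts that a deprecation
# warning was emitted, and passes through the return value.
assert pytest.deprecated_call(legacy_sum, 1, 2) == 3
```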

pytest.register_assert_rewrite

Tutorial: Assertion Rewriting.

register_assert_rewrite(*names) None[source]

Register one or more module names to be rewritten on import.

This function will make sure that this module or all modules inside the package will get their assert statements rewritten. Thus you should make sure to call this before the module is actually imported, usually in your __init__.py if you are a plugin using a package.

Raises

TypeError – if the given module names are not strings.
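For example, a plugin package’s __init__.py might register its helper module before importing it; "myplugin.helpers" is a hypothetical module name:

```python
import pytest

# Must run before the module is imported anywhere, or its assert
# statements will not be rewritten.
pytest.register_assert_rewrite("myplugin.helpers")

# from myplugin import helpers  # imported only after registering
```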

pytest.warns

Tutorial: Asserting warnings with the warns function

with warns(expected_warning: Exception[, match])[source]

Assert that code raises a particular class of warning.

expected_warning can be a warning class or sequence of warning classes, which are expected to be issued inside of the with block.

This helper produces a list of warnings.WarningMessage objects, one for each warning raised.

This function can be used as a context manager, or any of the other ways pytest.raises can be used:

>>> with warns(RuntimeWarning):
...    warnings.warn("my warning", RuntimeWarning)

In the context manager form the keyword argument match can be used to assert that the warning message matches the given regular expression (using re.search()):

>>> with warns(UserWarning, match='must be 0 or None'):
...     warnings.warn("value must be 0 or None", UserWarning)

>>> with warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("value must be 42", UserWarning)

>>> with warns(UserWarning, match=r'must be \d+$'):
...     warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
  ...
_pytest.outcomes.Failed: DID NOT WARN. No warning of type ...UserWarning... was emitted...

pytest.freeze_includes

Tutorial: Freezing pytest.

freeze_includes()[source]

Returns a list of module names used by pytest that should be included by cx_freeze.

Marks

Marks can be used to apply metadata to test functions (but not fixtures), which can then be accessed by fixtures or plugins.

pytest.mark.filterwarnings

Tutorial: @pytest.mark.filterwarnings.

Add warning filters to marked test items.

pytest.mark.filterwarnings(filter)
Parameters

filter (str) –

A warning specification string, which is composed of contents of the tuple (action, message, category, module, lineno) as specified in The Warnings filter section of the Python documentation, separated by ":". Optional fields can be omitted. Module names passed for filtering are not regex-escaped.

For example:

@pytest.mark.filterwarnings("ignore:.*usage will be deprecated.*:DeprecationWarning")
def test_foo():
    ...

pytest.mark.parametrize

Tutorial: Parametrizing fixtures and test functions.

Metafunc.parametrize(argnames: Union[str, List[str], Tuple[str, ...]], argvalues: Iterable[Union[_pytest.mark.structures.ParameterSet, Sequence[object], object]], indirect: Union[bool, Sequence[str]] = False, ids: Optional[Union[Iterable[Union[None, str, float, int, bool]], Callable[[object], Optional[object]]]] = None, scope: Optional[str] = None, *, _param_mark: Optional[_pytest.mark.structures.Mark] = None) None[source]

Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, see the indirect parameter below for a way to defer that work to test setup time.

Parameters
  • argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings.

  • argvalues – The list of argvalues determines how often a test is invoked with different argument values. If only one argname was specified argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.

  • indirect – A list of argument names (a subset of argnames), or a boolean. If True, all argnames are treated as indirect. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective fixture function, so that more expensive setup can happen during the setup phase of a test rather than at collection time.

  • ids

    sequence of (or generator for) ids for argvalues,

    or a callable to return part of the id for each argvalue.

    With sequences (and generators like itertools.count()) the returned ids should be of type string, int, float, bool, or None. They are mapped to the corresponding index in argvalues. None means to use the auto-generated id.

    If it is a callable it will be called for each entry in argvalues, and the return value is used as part of the auto-generated id for the whole set (where parts are joined with dashes (“-“)). This is useful to provide more specific ids for certain items, e.g. dates. Returning None will use an auto-generated id.

    If no ids are provided they will be generated automatically from the argvalues.

  • scope – if specified it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing to set a dynamic scope using test context or configuration.

pytest.mark.skip

Tutorial: Skipping test functions.

Unconditionally skip a test function.

pytest.mark.skip(*, reason=None)
Parameters

reason (str) – Reason why the test function is being skipped.

pytest.mark.skipif

Tutorial: Skipping test functions.

Skip a test function if a condition is True.

pytest.mark.skipif(condition, *, reason=None)
Parameters
  • condition (bool or str) – True/False whether the test should be skipped, or a condition string to be evaluated.

  • reason (str) – Reason why the test function is being skipped.
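For example (the platform condition and test body are illustrative):

```python
import sys

import pytest

# Skipped entirely on Windows; the reason shows up in the test report.
@pytest.mark.skipif(sys.platform == "win32", reason="requires a POSIX platform")
def test_posix_join():
    import posixpath
    assert posixpath.join("a", "b") == "a/b"
```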

pytest.mark.usefixtures

Tutorial: Using fixtures from classes, modules or projects.

Mark a test function as using the given fixture names.

Warning

This mark has no effect when applied to a fixture function.

pytest.mark.usefixtures(*names)
Parameters

*names – the names of the fixtures to use, as strings

pytest.mark.xfail

Tutorial: XFail: mark test functions as expected to fail.

Marks a test function as expected to fail.

pytest.mark.xfail(condition=None, *, reason=None, raises=None, run=True, strict=False)
Parameters
  • condition (bool or str) – Condition for marking the test function as xfail (True/False or a condition string).

  • reason (str) – Reason why the test function is marked as xfail.

  • raises (Exception) – Exception subclass expected to be raised by the test function; other exceptions will fail the test.

  • run (bool) – If the test function should actually be executed. If False, the function will always xfail and will not be executed (useful if a function is segfaulting).

  • strict (bool) –

    • If False (the default) the function will be shown in the terminal output as xfailed if it fails and as xpass if it passes. In both cases this will not cause the test suite to fail as a whole. This is particularly useful to mark flaky tests (tests that fail at random) to be tackled later.

    • If True, the function will be shown in the terminal output as xfailed if it fails, but if it unexpectedly passes then it will fail the test suite. This is particularly useful to mark functions that are always failing and there should be a clear indication if they unexpectedly start to pass (for example a new release of a library fixes a known bug).
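For example, a strict xfail for a known floating-point rounding surprise:

```python
import pytest

@pytest.mark.xfail(reason="2.675 is stored as 2.67499…, so round() gives 2.67",
                   strict=True)
def test_round_half_up():
    # This assertion fails as expected; with strict=True an unexpected
    # pass would fail the whole suite.
    assert round(2.675, 2) == 2.68
```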

custom marks

Marks are created dynamically using the factory object pytest.mark and applied as a decorator.

For example:

@pytest.mark.timeout(10, "slow", method="thread")
def test_function():
    ...

Will create and attach a Mark object to the collected Item, which can then be accessed by fixtures or hooks with Node.iter_markers. The mark object will have the following attributes:

mark.args == (10, "slow")
mark.kwargs == {"method": "thread"}

Fixtures

Tutorial: pytest fixtures: explicit, modular, scalable.

Fixtures are requested by test functions or other fixtures by declaring them as argument names.

Example of a test requiring a fixture:

def test_output(capsys):
    print("hello")
    out, err = capsys.readouterr()
    assert out == "hello\n"

Example of a fixture requiring another fixture:

@pytest.fixture
def db_session(tmpdir):
    fn = tmpdir / "db.file"
    return connect(str(fn))

For more details, consult the full fixtures docs.

@pytest.fixture

@fixture(callable_or_scope=None, *args, scope='function', params=None, autouse=False, ids=None, name=None)[source]

Decorator to mark a fixture factory function.

This decorator can be used, with or without parameters, to define a fixture function.

The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker.

Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.

Fixtures can provide their values to test functions using return or yield statements. When using yield the code block after the yield statement is executed as teardown code regardless of the test outcome, and must yield exactly once.

Parameters
  • scope

    the scope for which this fixture is shared, one of "function" (default), "class", "module", "package" or "session" ("package" is considered experimental at this time).

    This parameter may also be a callable which receives (fixture_name, config) as parameters, and must return a str with one of the values mentioned above.

    See Dynamic scope in the docs for more information.

  • params – an optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it. The current parameter is available in request.param.

  • autouse – if True, the fixture func is activated for all tests that can see it. If False (the default) then an explicit reference is needed to activate the fixture.

  • ids – list of string ids each corresponding to the params so that they are part of the test id. If no ids are provided they will be generated automatically from the params.

  • name – the name of the fixture. This defaults to the name of the decorated function. If a fixture is used in the same module in which it is defined, the function name of the fixture will be shadowed by the function arg that requests the fixture; one way to resolve this is to name the decorated function fixture_<fixturename> and then use @pytest.fixture(name='<fixturename>').
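A sketch combining several of these parameters; the backend names and ids are illustrative:

```python
import pytest

# Runs once per param for the whole module; ids make the generated test
# ids read "...[lite]" and "...[pg]".
@pytest.fixture(scope="module", params=["sqlite", "postgres"], ids=["lite", "pg"])
def db_backend(request):
    return request.param  # the current parameter

def test_backend_name(db_backend):
    assert db_backend in ("sqlite", "postgres")
```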

config.cache

Tutorial: Cache: working with cross-testrun state.

The config.cache object allows other plugins and fixtures to store and retrieve values across test runs. To access it from fixtures request pytestconfig into your fixture and get it with pytestconfig.cache.

Under the hood, the cache plugin uses the simple dumps/loads API of the json stdlib module.

Cache.get(key, default)[source]

Return the cached value for the given key. If no value was yet cached or the value cannot be read, the specified default is returned.

Parameters
  • key – must be a / separated value. Usually the first name is the name of your plugin or your application.

  • default – must be provided in case of a cache-miss or invalid cache values.

Cache.set(key, value)[source]

save value for the given key.

Parameters
  • key – must be a / separated value. Usually the first name is the name of your plugin or your application.

  • value – must be of any combination of basic python types, including nested types like e. g. lists of dictionaries.

Cache.makedir(name)[source]

Return a directory path object with the given name. If the directory does not yet exist, it will be created. You can use it to manage files, e.g. to store/retrieve database dumps across test sessions.

Parameters

name – must be a string not containing a / separator. Make sure the name contains your plugin or application identifiers to prevent clashes with other cache users.

capsys

Tutorial: Capturing of the stdout/stderr output.

capsys()[source]

Enable text capturing of writes to sys.stdout and sys.stderr.

The captured output is made available via capsys.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture.

Example:

def test_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"

class CaptureFixture[source]

Object returned by capsys(), capsysbinary(), capfd() and capfdbinary() fixtures.

readouterr()[source]

Read and return the captured output so far, resetting the internal buffer.

Returns

captured content as a namedtuple with out and err string attributes

with disabled()[source]

Temporarily disables capture while inside the ‘with’ block.
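For example, disabled() lets a test print progress directly to the real stdout while everything else stays captured:

```python
def test_disabling_capturing(capsys):
    print("this output is captured")
    with capsys.disabled():
        print("output not captured, going directly to sys.stdout")
    print("this output is also captured")
    captured = capsys.readouterr()
    # Only the two captured lines show up in the readouterr() result.
    assert "not captured" not in captured.out
```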

capsysbinary

Tutorial: Capturing of the stdout/stderr output.

capsysbinary()[source]

Enable bytes capturing of writes to sys.stdout and sys.stderr.

The captured output is made available via capsysbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.

Returns an instance of CaptureFixture.

Example:

def test_output(capsysbinary):
    print("hello")
    captured = capsysbinary.readouterr()
    assert captured.out == b"hello\n"

capfd

Tutorial: Capturing of the stdout/stderr output.

capfd()[source]

Enable text capturing of writes to file descriptors 1 and 2.

The captured output is made available via capfd.readouterr() method calls, which return a (out, err) namedtuple. out and err will be text objects.

Returns an instance of CaptureFixture.

Example:

def test_system_echo(capfd):
    os.system('echo "hello"')
    captured = capfd.readouterr()
    assert captured.out == "hello\n"

capfdbinary

Tutorial: Capturing of the stdout/stderr output.

capfdbinary()[source]

Enable bytes capturing of writes to file descriptors 1 and 2.

The captured output is made available via capfdbinary.readouterr() method calls, which return a (out, err) namedtuple. out and err will be bytes objects.

Returns an instance of CaptureFixture.

Example:

def test_system_echo(capfdbinary):
    os.system('echo "hello"')
    captured = capfdbinary.readouterr()
    assert captured.out == b"hello\n"

doctest_namespace

Tutorial: Doctest integration for modules and test files.

doctest_namespace()[source]

Fixture that returns a dict that will be injected into the namespace of doctests.

Usually this fixture is used in conjunction with another autouse fixture:

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy

For more details: ‘doctest_namespace’ fixture.

request

Tutorial: Pass different values to a test function, depending on command line options.

The request fixture is a special fixture providing information of the requesting test function.

class FixtureRequest[source]

A request for a fixture from a test or fixture function.

A request object gives access to the requesting test context and has an optional param attribute in case the fixture is parametrized indirectly.

fixturename

fixture for which this request is being performed

scope

Scope string, one of “function”, “class”, “module”, “session”

property fixturenames

names of all active fixtures in this request

property funcargnames

alias attribute for fixturenames for pre-2.3 compatibility

property node

underlying collection node (depends on current request scope)

property config

the pytest config object associated with this request.

property function

test function object if the request has a per-function scope.

property cls

class (can be None) where the test function was collected.

property instance

instance (can be None) on which test function was collected.

property module

python module object where the test function was collected.

property fspath

the file system path of the test module which collected this test.

property keywords

keywords/markers dictionary for the underlying node.

property session

pytest session object.

addfinalizer(finalizer)[source]

Add a finalizer/teardown function to be called after the last test within the requesting test context has finished execution.

applymarker(marker)[source]

Apply a marker to a single test function invocation. This method is useful if you don’t want to have a keyword/marker on all function invocations.

Parameters

marker – a _pytest.mark.MarkDecorator object created by a call to pytest.mark.NAME(...).

raiseerror(msg)[source]

raise a FixtureLookupError with the given message.

getfixturevalue(argname)[source]

Dynamically run a named fixture function.

Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.
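A sketch; "fast_backend" and "full_backend" are hypothetical fixture names chosen at setup time:

```python
import os

import pytest

@pytest.fixture
def backend(request):
    # Pick one of two hypothetical fixtures by name at setup time,
    # e.g. based on the environment the suite runs in.
    name = "fast_backend" if os.environ.get("CI") else "full_backend"
    return request.getfixturevalue(name)
```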

pytestconfig

pytestconfig()[source]

Session-scoped fixture that returns the _pytest.config.Config object.

Example:

def test_foo(pytestconfig):
    if pytestconfig.getoption("verbose") > 0:
        ...

record_property

Tutorial: record_property.

record_property()[source]

Add extra properties to the calling test. User properties become part of the test report and are available to the configured reporters, like JUnit XML. The fixture is callable with (name, value), with value being automatically xml-encoded.

Example:

def test_function(record_property):
    record_property("example_key", 1)

record_testsuite_property

Tutorial: record_testsuite_property.

record_testsuite_property()[source]

Records a new <property> tag as child of the root <testsuite>. This is suitable for writing global information regarding the entire test suite, and is compatible with the xunit2 JUnit family.

This is a session-scoped fixture which is called with (name, value). Example:

def test_foo(record_testsuite_property):
    record_testsuite_property("ARCH", "PPC")
    record_testsuite_property("STORAGE_TYPE", "CEPH")

name must be a string, value will be converted to a string and properly xml-escaped.

caplog

Tutorial: Logging.

caplog()[source]

Access and control log capturing.

Captured logs are available through the following properties/methods:

* caplog.messages        -> list of format-interpolated log messages
* caplog.text            -> string containing formatted log output
* caplog.records         -> list of logging.LogRecord instances
* caplog.record_tuples   -> list of (logger_name, level, message) tuples
* caplog.clear()         -> clear captured records and formatted log output string

This returns a _pytest.logging.LogCaptureFixture instance.

class LogCaptureFixture(item)[source]

Provides access and control of log capturing.

property handler: _pytest.logging.LogCaptureHandler
Return type

LogCaptureHandler

get_records(when: str) List[logging.LogRecord][source]

Get the logging records for one of the possible test phases.

Parameters

when (str) – Which test phase to obtain the records from. Valid values are: “setup”, “call” and “teardown”.

Return type

List[logging.LogRecord]

Returns

the list of captured records at the given stage

New in version 3.4.

property text

Returns the formatted log text.

property records

Returns the list of log records.

property record_tuples

Returns a list of stripped-down versions of the log records, intended for use in assertion comparison.

The format of the tuple is:

(logger_name, log_level, message)

property messages

Returns a list of format-interpolated log messages.

Unlike ‘records’, which contains the format string and parameters for interpolation, log messages in this list are all interpolated. Unlike ‘text’, which contains the output from the handler, log messages in this list are unadorned with levels, timestamps, etc, making exact comparisons more reliable.

Note that traceback or stack info (from logging.exception() or the exc_info or stack_info arguments to the logging functions) is not included, as this is added by the formatter in the handler.

New in version 3.7.

clear()[source]

Reset the list of log records and the captured log text.

set_level(level, logger=None)[source]

Sets the level for capturing of logs. The level will be restored to its previous value at the end of the test.

Parameters
  • level (int) – the level to set.

  • logger (str) – the logger whose level to update. If not given, the root logger level is updated.

Changed in version 3.4: The levels of the loggers changed by this function will be restored to their initial values at the end of the test.

with at_level(level, logger=None)[source]

Context manager that sets the level for capturing of logs. After the end of the ‘with’ statement the level is restored to its original value.

Parameters
  • level (int) – the level to set.

  • logger (str) – the logger whose level to update. If not given, the root logger level is updated.

monkeypatch

Tutorial: Monkeypatching/mocking modules and environments.

monkeypatch()[source]

The returned monkeypatch fixture provides these helper methods to modify objects, dictionaries or os.environ:

monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)

All modifications will be undone after the requesting test function or fixture has finished. The raising parameter determines if a KeyError or AttributeError will be raised if the set/deletion operation has no target.

This returns a MonkeyPatch instance.

class MonkeyPatch[source]

Object returned by the monkeypatch fixture keeping a record of setattr/item/env/syspath changes.

with context() Generator[_pytest.monkeypatch.MonkeyPatch, None, None][source]

Context manager that returns a new MonkeyPatch object which undoes any patching done inside the with block upon exit:

import functools


def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)

Useful in situations where it is desired to undo some patches before the test ends, such as mocking stdlib functions that might break pytest itself if mocked (for examples of this see #3290).

setattr(target, name, value=<notset>, raising=True)[source]

Set attribute value on target, memorizing the old value. By default raise AttributeError if the attribute did not exist.

For convenience you can specify a string as target which will be interpreted as a dotted import path, with the last part being the attribute name. Example: monkeypatch.setattr("os.getcwd", lambda: "/") would set the getcwd function of the os module.

The raising value determines if the setattr should fail if the attribute is not already present (defaults to True which means it will raise).

delattr(target, name=<notset>, raising=True)[source]

Delete attribute name from target; by default raise AttributeError if the attribute did not previously exist.

If no name is specified and target is a string it will be interpreted as a dotted import path with the last part being the attribute name.

If raising is set to False, no exception will be raised if the attribute is missing.

setitem(dic, name, value)[source]

Set dictionary entry name to value.

delitem(dic, name, raising=True)[source]

Delete name from dict. Raise KeyError if it doesn’t exist.

If raising is set to False, no exception will be raised if the key is missing.

setenv(name: str, value: Optional[str], prepend: Optional[str] = None) None[source]

Set environment variable name to value. If prepend is a character, read the current environment variable value and prepend the new value joined with the prepend character.

A value of None unsets it, which is useful as a shortcut with parametrization.

delenv(name: str, raising: bool = True) None[source]

Delete name from the environment. Raise KeyError if it does not exist.

If raising is set to False, no exception will be raised if the environment variable is missing.

mockimport(mocked_imports: Union[str, Sequence[str]], err: Union[function, Type[BaseException]] = <class 'ImportError'>) None[source]

Mock imports so that the given error is raised, or the given callable is invoked.

The callable gets called instead of __import__().

This is considered to be experimental.

syspath_prepend(path)[source]

Prepend path to sys.path list of import locations.

chdir(path)[source]

Change the current working directory to the specified path. Path can be a string or a py.path.local object.

undo()[source]

Undo previous changes. This call consumes the undo stack. Calling it a second time has no effect unless you do more monkeypatching after the undo call.

There is generally no need to call undo(), since it is called automatically during tear-down.

Note that the same monkeypatch fixture is used across a single test function invocation. If monkeypatch is used both by the test function itself and one of the test fixtures, calling undo() will undo all of the changes made in both functions.

testdir

This fixture provides a Testdir instance, useful for black-box testing of test files, making it ideal for testing plugins.

To use it, include in your top-most conftest.py file:

pytest_plugins = "pytester"

class Testdir(request: _pytest.fixtures.FixtureRequest, tmpdir_factory: _pytest.tmpdir.TempdirFactory)[source]

Temporary test directory with tools to test/run pytest itself.

This is based on the tmpdir fixture, but provides a number of methods which aid with testing pytest itself. Unless chdir() is used all methods will use tmpdir as their current working directory.

CLOSE_STDIN = 1

Sentinel to close stdin.

exception TimeoutExpired[source]

tmpdir

The base temporary directory.

Type

py.path.local

plugins

A list of plugins to use with parseconfig() and runpytest().

Initially this is an empty list but plugins can be added to the list. The type of items to add to the list depends on the method using them so refer to them for details.

finalize()[source]

Clean up global state artifacts.

Some methods modify the global interpreter state and this tries to clean this up. It does not remove the temporary directory however so it can be looked at after the test run has finished.

make_hook_recorder(pluginmanager)[source]

Create a new HookRecorder for a PluginManager.

chdir()[source]

Cd into the temporary directory.

This is done automatically upon instantiation.

makefile(ext, *args, **kwargs)[source]

Create new file(s) in the testdir.

Parameters
  • ext (str) – The extension the file(s) should use, including the dot, e.g. .py.

  • args (list[str]) – All args will be treated as strings and joined using newlines. The result will be written as contents to the file. The name of the file will be based on the test function requesting this fixture.

  • kwargs – Each keyword is the name of a file, while the value of it will be written as contents of the file.

Examples:

testdir.makefile(".txt", "line1", "line2")

testdir.makefile(".ini", pytest="[pytest]\naddopts=-rs\n")

See also makefiles().

makeconftest(source)[source]

Write a conftest.py file with ‘source’ as contents.

makeini(source)[source]

Write a tox.ini file with ‘source’ as contents.

getinicfg(source)[source]

Return the pytest section from the tox.ini config file.

makepyfile(*args, **kwargs) py._path.local.LocalPath[source]

Shortcut for .makefile() with a .py extension.

maketxtfile(*args, **kwargs)[source]

Shortcut for .makefile() with a .txt extension.

makefiles(files: Union[Mapping[str, str], Sequence[Tuple[str, str]]], *, base_path: Optional[str] = None, dedent: bool = True, strip_outer_newlines: bool = True, clobber: bool = False) List[pathlib.Path][source]

Create the given set of files.

This is a more straightforward API than the other helpers (e.g. makepyfile()).

Parameters
  • files (Mapping[str,str]|Sequence[Tuple[str,str]]) –

    Mapping of filenames to file contents, or a sequence with filename/contents pairs.

    Absolute paths are handled, but have to be inside of tmpdir.

  • base_path – Optional base path for relative paths (defaults to current working directory).

  • dedent (bool) – Dedent the contents (via textwrap.dedent()).

  • strip_outer_newlines (bool) – Strip leading and trailing newlines from contents.

  • clobber (bool) –

    Overwrite existing files or (dangling) symlinks.

    (Dangling) symlinks are replaced with regular files.

Returns

List[_pytest.pathlib.Path]

syspathinsert(path=None)[source]

Prepend a directory to sys.path, defaults to tmpdir.

This is undone automatically at the end of each test.

mkdir(name)[source]

Create a new (sub)directory.

mkpydir(name)[source]

Create a new python package.

This creates a (sub)directory with an empty __init__.py file so it gets recognised as a python package.

copy_example(name=None)[source]

Copy file from project’s directory into the testdir.

Parameters

name (str) – The name of the file to copy.

Returns

path to the copied directory (inside self.tmpdir).

getnode(config, arg)[source]

Return the collection node of a file.

Parameters
getpathnode(path)[source]

Return the collection node of a file.

This is like getnode() but uses parseconfigure() to create the (configured) pytest Config instance.

Parameters

path – a py.path.local instance of the file

genitems(colitems)[source]

Generate all test items from a collection node.

This recurses into the collection node and returns a list of all the test items contained within.

runitem(source)[source]

Run the “test_func” Item.

The calling test instance (class containing the test method) must provide a .getrunner() method which should return a runner which can run the test protocol for a single item, e.g. _pytest.runner.runtestprotocol().

inline_runsource(source, *cmdlineargs)[source]

Run a test module in process using pytest.main().

This run writes “source” into a temporary file and runs pytest.main() on it, returning a HookRecorder instance for the result.

Parameters
  • source – the source code of the test module

  • cmdlineargs – any extra command line arguments to use

Returns

HookRecorder instance of the result

inline_genitems(*args) Tuple[List[_pytest.nodes.Item], _pytest.pytester.HookRecorder][source]

Run pytest.main(['--collectonly']) in-process.

Runs the pytest.main() function to run all of pytest inside the test process itself like inline_run(), but returns a tuple of the collected items and a HookRecorder instance.

inline_run(*args, plugins=(), no_reraise_ctrlc: bool = False)[source]

Run pytest.main() in-process, returning a HookRecorder.

Runs the pytest.main() function to run all of pytest inside the test process itself. This means it can return a HookRecorder instance which gives more detailed results from that run than can be done by matching stdout/stderr from runpytest().

Parameters
  • args – command line arguments to pass to pytest.main()

  • plugins – extra plugin instances the pytest.main() instance should use.

  • no_reraise_ctrlc – typically we reraise keyboard interrupts from the child run. If True, the KeyboardInterrupt exception is captured.

Returns

a HookRecorder instance

runpytest_inprocess(*args, tty=None, **kwargs) _pytest.pytester.RunResult[source]

Return result of running pytest in-process, providing a similar interface to what self.runpytest() provides.

runpytest(*args_, tty=None, **kwargs) _pytest.pytester.RunResult[source]

Run pytest inline or in a subprocess, depending on the command line option “–runpytest” and return a RunResult.

parseconfig(*args_: Union[str, py._path.local.LocalPath]) _pytest.config.Config[source]

Return a new pytest Config instance from given commandline args.

This invokes the pytest bootstrapping code in _pytest.config to create a new _pytest.core.PluginManager and call the pytest_cmdline_parse hook to create a new _pytest.config.Config instance.

If plugins has been populated they should be plugin modules to be registered with the PluginManager.

parseconfigure(*args)[source]

Return a new pytest configured Config instance.

This returns a new _pytest.config.Config instance like parseconfig(), but also calls the pytest_configure hook.

getitem(source, funcname='test_func')[source]

Return the test item for a test function.

This writes the source to a python file and runs pytest’s collection on the resulting module, returning the test item for the requested function name.

Parameters
  • source – the module source

  • funcname – the name of the test function for which to return a test item

getitems(source)[source]

Return all test items collected from the module.

This writes the source to a python file and runs pytest’s collection on the resulting module, returning all test items contained within.

getmodulecol(source, configargs=(), withinit=False)[source]

Return the module collection node for source.

This writes source to a file using makepyfile() and then runs the pytest collection on it, returning the collection node for the test module.

Parameters
  • source – the source code of the module to collect

  • configargs – any extra arguments to pass to parseconfigure()

  • withinit – whether to also write an __init__.py file to the same directory to ensure it is a package

collect_by_name(modcol: _pytest.python.Module, name: str) Optional[Union[_pytest.nodes.Item, _pytest.nodes.Collector]][source]

Return the collection node for name from the module collection.

This will search a module collection node for a collection node matching the given name.

Parameters
  • modcol – a module collection node; see getmodulecol()

  • name – the name of the node to return

popen(cmdargs, stdout: Optional[Union[int, IO]] = - 1, stderr: Optional[Union[int, IO]] = - 1, stdin: Optional[Union[_pytest.capture.CloseStdinType, bytes, str, int, IO]] = CloseStdinType.CLOSE_STDIN, *, encoding: Optional[str] = None, **kw) Union[subprocess.Popen[bytes], subprocess.Popen[str]][source]

Invoke subprocess.Popen.

This calls subprocess.Popen making sure the current working directory is in the PYTHONPATH.

encoding is only supported with Python 3.6+.

You probably want to use run() instead.

run(*cmdargs, timeout=None, stdin: Optional[Union[_pytest.capture.CloseStdinType, bytes, str, int, IO]] = CloseStdinType.CLOSE_STDIN) _pytest.pytester.RunResult[source]

Run a command with arguments.

Run a process using subprocess.Popen, saving the stdout and stderr.

Parameters
  • args – the sequence of arguments to pass to subprocess.Popen()

  • timeout – the period in seconds after which to timeout and raise Testdir.TimeoutExpired

  • stdin

    optional standard input. Bytes/strings are sent and the pipe is closed afterwards; otherwise the value is passed through to Popen.

    Defaults to CLOSE_STDIN, which translates to using a pipe (subprocess.PIPE) that gets closed.

Returns a RunResult.

runpython(script) _pytest.pytester.RunResult[source]

Run a python script using sys.executable as interpreter.

Returns a RunResult.

runpython_c(command)[source]

Run python -c “command”, return a RunResult.

runpytest_subprocess(*args, stdin=CloseStdinType.CLOSE_STDIN, timeout=None) _pytest.pytester.RunResult[source]

Run pytest as a subprocess with given arguments.

Any plugins added to the plugins list will be added using the -p command line option. Additionally --basetemp is used to put any temporary files and directories in a numbered directory prefixed with “runpytest-” to not conflict with the normal numbered pytest location for temporary files and directories.

Parameters
  • args – the sequence of arguments to pass to the pytest subprocess

  • timeout – the period in seconds after which to timeout and raise Testdir.TimeoutExpired

  • stdin – optional standard input. Passed through to run().

Returns a RunResult.

spawn_pytest(*args: str, **kwargs) pexpect.spawn[source]

Run pytest using pexpect.

This makes sure to use the right pytest and sets up the temporary directory locations.

The pexpect child is returned.

spawn(*args: str, **kwargs) pexpect.spawn[source]

Run a command using pexpect.

The pexpect child is returned.

class RunResult[source]

The result of running a command.

Attributes:

Variables
  • ret – the return value

  • outlines – list of lines captured from stdout

  • errlines – list of lines captured from stderr

  • stdout – LineMatcher of stdout; use stdout.str() to reconstruct stdout, or the commonly used stdout.fnmatch_lines() method

  • stderr – LineMatcher of stderr

  • duration – duration in seconds

parseoutcomes() Dict[str, int][source]

Return a dictionary mapping outcome strings to counts, parsed from the terminal output that the test process produced.

assert_outcomes(passed: int = 0, skipped: int = 0, failed: int = 0, error: int = 0, xpassed: int = 0, xfailed: int = 0) None[source]

Assert that the specified outcomes appear with the respective numbers (0 means it didn’t occur) in the text output from a test run.

class LineMatcher[source]

Flexible matching of text.

This is a convenience class to test large texts like the output of commands.

Parameters

lines (List[str]) – a list of lines without their trailing newlines, e.g. from text.splitlines().

fnmatch_lines_random(lines2: Sequence[str]) None[source]

Check lines exist in the output in any order (using fnmatch.fnmatch()).

re_match_lines_random(lines2: Sequence[str]) None[source]

Check lines exist in the output in any order (using re.match()).

get_lines_after(fnline: str) Sequence[str][source]

Return all lines following the given line in the text.

The given line can contain glob wildcards.

fnmatch_lines(lines2: Sequence[str], *, consecutive: bool = False, complete: bool = False) None[source]

Check lines exist in the output (using fnmatch.fnmatch()).

The argument is a list of lines which have to match and can use glob wildcards. If they do not match a pytest.fail() is called. The matches and non-matches are also shown as part of the error message.

Parameters
  • lines2 – string patterns to match.

  • consecutive (bool) – match lines consecutively?

  • complete (bool) – require all lines to be matched in total
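A sketch of direct use, constructing a LineMatcher from literal lines rather than captured command output (the lines are illustrative):

```python
from _pytest.pytester import LineMatcher

matcher = LineMatcher([
    "collected 2 items",
    "test_app.py ..",
    "2 passed in 0.12s",
])
# Glob-style patterns must appear in this relative order.
matcher.fnmatch_lines(["*collected 2 items*", "*2 passed*"])
matcher.no_fnmatch_line("*failed*")
```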

re_match_lines(lines2: Sequence[str], *, consecutive: bool = False, complete: bool = False) None[source]

Check lines exist in the output (using re.match()).

The argument is a list of lines which have to match using re.match. If they do not match a pytest.fail() is called.

The matches and non-matches are also shown as part of the error message.

Parameters
  • lines2 – string patterns to match.

  • consecutive (bool) – match lines consecutively?

  • complete (bool) – require all lines to be matched in total

no_fnmatch_line(pat: str) None[source]

Ensure captured lines do not match the given pattern, using fnmatch.fnmatch.

Parameters

pat (str) – the pattern to match lines.

no_re_match_line(pat: str) None[source]

Ensure captured lines do not match the given pattern, using re.match.

Parameters

pat (str) – the regular expression to match lines.

str() str[source]

Return the entire original text.

recwarn

Tutorial: Asserting warnings with the warns function

recwarn()[source]

Return a WarningsRecorder instance that records all warnings emitted by test functions.

See http://docs.python.org/library/warnings.html for information on warning categories.

class WarningsRecorder[source]

A context manager to record raised warnings.

Adapted from warnings.catch_warnings.

property list: List[warnings.WarningMessage]

The list of recorded warnings.

pop(cls: Type[Warning] = <class 'Warning'>) warnings.WarningMessage[source]

Pop the first recorded warning of the given class, raising an exception if none exists.

clear() None[source]

Clear the list of recorded warnings.

Each recorded warning is an instance of warnings.WarningMessage.

Note

RecordedWarning was changed from a plain class to a namedtuple in pytest 3.1

Note

DeprecationWarning and PendingDeprecationWarning are treated differently; see Ensuring code triggers a deprecation warning.

tmp_path

Tutorial: Temporary directories and files

tmp_path()[source]

Return a temporary directory path object which is unique to each test function invocation, created as a sub directory of the base temporary directory. The returned object is a pathlib.Path object.

Note

In Python < 3.6 this is a pathlib2.Path.
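For example, a sketch using the returned pathlib.Path (file names are illustrative):

```python
def test_create_and_read(tmp_path):
    target = tmp_path / "output"
    target.mkdir()
    data = target / "hello.txt"
    data.write_text("hello")
    assert data.read_text() == "hello"
    # The directory is unique to this test invocation.
    assert list(tmp_path.iterdir()) == [target]
```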

tmp_path_factory

Tutorial: The tmp_path_factory fixture

tmp_path_factory instances have the following methods:

TempPathFactory.mktemp(basename: str, numbered: bool = True) pathlib.Path[source]

Creates a new temporary directory managed by the factory.

Parameters
  • basename – Directory base name, must be a relative path.

  • numbered – If True, ensure the directory is unique by adding a number prefix greater than any existing one: basename="foo" and numbered=True means that this function will create directories named "foo-0", "foo-1", "foo-2" and so on.

Returns

The path to the new directory.

TempPathFactory.getbasetemp() pathlib.Path[source]

return base temporary directory.

tmpdir

Tutorial: Temporary directories and files

tmpdir()[source]

Return a temporary directory path object which is unique to each test function invocation, created as a sub directory of the base temporary directory. The returned object is a py.path.local path object.

tmpdir_factory

Tutorial: The ‘tmpdir_factory’ fixture

tmpdir_factory instances have the following methods:

TempdirFactory.mktemp(basename: str, numbered: bool = True) py._path.local.LocalPath[source]

Same as TempPathFactory.mktemp(), but returns a py.path.local object.

TempdirFactory.getbasetemp() py._path.local.LocalPath[source]

Backward-compatibility wrapper for _tmppath_factory.getbasetemp, returning a py.path.local object.

Hooks

Tutorial: Writing plugins.

Reference to all hooks which can be implemented by conftest.py files and plugins.

Bootstrapping hooks

Bootstrapping hooks called for plugins registered early enough (internal and setuptools plugins).

pytest_load_initial_conftests(early_config, parser, args)[source]

implements the loading of initial conftest files ahead of command line option parsing.

Note

This hook will not be called for conftest.py files, only for setuptools plugins.

Parameters
pytest_cmdline_preparse(config, args)[source]

(Deprecated) modify command line arguments before option parsing.

This hook is considered deprecated and will be removed in a future pytest version. Consider using pytest_load_initial_conftests() instead.

Note

This hook will not be called for conftest.py files, only for setuptools plugins.

Parameters
pytest_cmdline_parse(pluginmanager, args)[source]

return initialized config object, parsing the specified args.

Stops at first non-None result, see firstresult: stop at first non-None result

Note

This hook will only be called for plugin classes passed to the plugins arg when using pytest.main to perform an in-process test run.

Parameters
pytest_cmdline_main(config)[source]

called for performing the main command line action. The default implementation will invoke the configure hooks and runtest_mainloop.

Note

This hook will not be called for conftest.py files, only for setuptools plugins.

Stops at first non-None result, see firstresult: stop at first non-None result

Parameters

config (_pytest.config.Config) – pytest config object

Initialization hooks

Initialization hooks called for plugins and conftest.py files.

pytest_addoption(parser, pluginmanager)[source]

register argparse-style options and ini-style config values, called once at the beginning of a test run.

Note

This function should be implemented only in plugins or conftest.py files situated at the tests root directory due to how pytest discovers plugins during startup.

Parameters

Options can later be accessed through the config object, respectively via config.getoption(name) and config.getini(name).

The config object is passed around on many internal objects via the .config attribute or can be retrieved as the pytestconfig fixture.

Note

This hook is incompatible with hookwrapper=True.

pytest_addhooks(pluginmanager)[source]

called at plugin registration time to allow adding new hooks via a call to pluginmanager.add_hookspecs(module_or_class, prefix).

Parameters

pluginmanager (_pytest.config.PytestPluginManager) – pytest plugin manager

Note

This hook is incompatible with hookwrapper=True.

pytest_configure(config)[source]

Allows plugins and conftest files to perform initial configuration.

This hook is called for every plugin and initial conftest file after command line options have been parsed.

After that, the hook is called for other conftest files as they are imported.

Note

This hook is incompatible with hookwrapper=True.

Parameters

config (_pytest.config.Config) – pytest config object

pytest_unconfigure(config)[source]

called before test process is exited.

Parameters

config (_pytest.config.Config) – pytest config object

pytest_sessionstart(session)[source]

called after the Session object has been created and before performing collection and entering the run test loop.

Parameters

session (_pytest.main.Session) – the pytest session object

pytest_sessionfinish(session, exitstatus)[source]

called after whole test run finished, right before returning the exit status to the system.

Parameters
  • session (_pytest.main.Session) – the pytest session object

  • exitstatus (int) – the status which pytest will return to the system
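For example, a minimal sketch printing a summary line once the run has finished (the message is illustrative):

```python
# conftest.py
def pytest_sessionfinish(session, exitstatus):
    # Runs once, right before pytest returns the exit status to the system.
    print(f"run finished with exit status {exitstatus}")
```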

pytest_plugin_registered(plugin, manager)[source]

a new pytest plugin got registered.

Parameters

Note

This hook is incompatible with hookwrapper=True.

Test running hooks

All runtest related hooks receive a pytest.Item object.

pytest_runtestloop(session)[source]

called for performing the main runtest loop (after collection finished).

Stops at first non-None result, see firstresult: stop at first non-None result

Parameters

session (_pytest.main.Session) – the pytest session object

pytest_runtest_protocol(item, nextitem)[source]

implements the runtest_setup/call/teardown protocol for the given test item, including capturing exceptions and calling reporting hooks.

Parameters
  • item – test item for which the runtest protocol is performed.

  • nextitem – the scheduled-to-be-next test item (or None if this is the end my friend). This argument is passed on to pytest_runtest_teardown().

Return boolean

True if no further hook implementations should be invoked.

Stops at first non-None result, see firstresult: stop at first non-None result

pytest_runtest_logstart(nodeid, location)[source]

signal the start of running a single test item.

This hook will be called before pytest_runtest_setup(), pytest_runtest_call() and pytest_runtest_teardown() hooks.

Parameters
  • nodeid (str) – full id of the item

  • location – a triple of (filename, linenum, testname)

pytest_runtest_logfinish(nodeid, location)[source]

signal the complete finish of running a single test item.

This hook will be called after pytest_runtest_setup(), pytest_runtest_call() and pytest_runtest_teardown() hooks.

Parameters
  • nodeid (str) – full id of the item

  • location – a triple of (filename, linenum, testname)

pytest_runtest_setup(item)[source]

called before pytest_runtest_call(item).

pytest_runtest_call(item)[source]

called to execute the test item.

pytest_runtest_teardown(item, nextitem)[source]

called after pytest_runtest_call.

Parameters

nextitem – the scheduled-to-be-next test item (None if no further test item is scheduled). This argument can be used to perform exact teardowns, i.e. calling just enough finalizers so that nextitem only needs to call setup-functions.

pytest_runtest_makereport(item, call)[source]

return a _pytest.runner.TestReport object for the given pytest.Item and _pytest.runner.CallInfo.

Stops at first non-None result, see firstresult: stop at first non-None result

For deeper understanding you may look at the default implementation of these hooks in _pytest.runner and maybe also in _pytest.pdb which interacts with _pytest.capture and its input/output capturing in order to immediately drop into interactive debugging when a test failure occurs.

The _pytest.terminal reporter specifically uses the reporting hooks to print information about a test run.

pytest_pyfunc_call(pyfuncitem)[source]

call underlying test function.

Stops at first non-None result, see firstresult: stop at first non-None result

Collection hooks

pytest calls the following hooks for collecting files and directories:

pytest_collection(session: Session) Optional[Any][source]

Perform the collection protocol for the given session.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters

session (_pytest.main.Session) – the pytest session object

pytest_ignore_collect(path, config: Config) Optional[Union[bool, Tuple[bool, Optional[str]]]][source]

return True to prevent considering this path for collection. This hook is consulted for all files and directories prior to calling more specific hooks.

Stops at first non-None result, see firstresult: stop at first non-None result, i.e. you should only return False if the file should never get ignored (by other hooks).

It can also return a tuple with a reason/description instead, which gets used for reporting:

return (True, "collect_ignore")

Parameters
pytest_collect_directory(path, parent)[source]

called before traversing a directory for collection files.

Stops at first non-None result, see firstresult: stop at first non-None result

Parameters

path – a py.path.local - the path to analyze

pytest_collect_file(path, parent)[source]

return collection Node or None for the given path. Any new node needs to have the specified parent as a parent.

Parameters

path – a py.path.local - the path to collect

pytest_pycollect_makemodule(path, parent)[source]

return a Module collector or None for the given path. This hook will be called for each matching test module path. The pytest_collect_file hook needs to be used if you want to create test modules for files that do not match as a test module.

Stops at first non-None result, see firstresult: stop at first non-None result

Parameters

path – a py.path.local - the path of module to collect

For influencing the collection of objects in Python modules you can use the following hook:

pytest_pycollect_makeitem(collector, name, obj)[source]

return custom item/collector for a python object in a module, or None.

Stops at first non-None result, see firstresult: stop at first non-None result

pytest_generate_tests(metafunc)[source]

generate (multiple) parametrized calls to a test function.
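A typical conftest.py sketch: parametrize every test that asks for a given argument. The fixture name "browser" and its values are illustrative assumptions:

```python
# conftest.py sketch: any test with a "browser" argument is invoked
# once per listed value; tests without it are left untouched.
def pytest_generate_tests(metafunc):
    if "browser" in metafunc.fixturenames:
        metafunc.parametrize("browser", ["firefox", "chromium"])
```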

pytest_make_parametrize_id(config, val, argname)[source]

Return a user-friendly string representation of the given val that will be used by @pytest.mark.parametrize calls. Return None if the hook doesn’t know about val. The parameter name is available as argname, if required.

Stops at first non-None result, see firstresult: stop at first non-None result

Parameters
  • config (_pytest.config.Config) – pytest config object

  • val – the parametrized value

  • argname (str) – the automatic parameter name produced by pytest
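A sketch of the deferral pattern described above, using datetime.date as the example value type:

```python
import datetime

# Give date parameters a readable id; returning None for other values
# keeps pytest's automatic id generation for them.
def pytest_make_parametrize_id(config, val, argname):
    if isinstance(val, datetime.date):
        return "{}={}".format(argname, val.isoformat())
    return None
```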

After collection is complete, you can modify the order of items, delete or otherwise amend the test items:

pytest_collection_modifyitems(session, config, items)[source]

called after collection has been performed, may filter or re-order the items in-place.

pytest_collection_finish(session)[source]

called after collection has been performed and modified.

Parameters

session (_pytest.main.Session) – the pytest session object

Reporting hooks

Session related reporting hooks:

pytest_collectstart(collector)[source]

collector starts collecting.

pytest_make_collect_report(collector)[source]

perform collector.collect() and return a CollectReport.

Stops at first non-None result, see firstresult: stop at first non-None result

pytest_itemcollected(item)[source]

we just collected a test item.

pytest_collectreport(report)[source]

collector finished collecting.

pytest_deselected(items)[source]

called for test items deselected, e.g. by keyword.

pytest_report_header(config, startdir)[source]

return a string or list of strings to be displayed as header info for terminal reporting.

Parameters
  • config (_pytest.config.Config) – pytest config object

  • startdir – py.path object with the starting dir

Note

This function should be implemented only in plugins or conftest.py files situated at the tests' root directory, due to how pytest discovers plugins during startup.
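A root-level conftest.py sketch; the project name shown is an illustrative assumption. Both a single string and a list of strings are accepted, each list entry becoming one header line:

```python
import sys

def pytest_report_header(config, startdir):
    return [
        "project: example-project (illustrative name)",
        "python: {}.{}.{}".format(*sys.version_info[:3]),
    ]
```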

pytest_report_collectionfinish(config, startdir, items)[source]

New in version 3.2.

return a string or list of strings to be displayed after collection has finished successfully.

These strings will be displayed after the standard “collected X items” message.

Parameters
  • config (_pytest.config.Config) – pytest config object

  • startdir – py.path object with the starting dir

  • items – list of pytest items that are going to be executed; this list should not be modified.

pytest_report_teststatus(report, config)[source]

return result-category, shortletter and verbose word for reporting.

Parameters

config (_pytest.config.Config) – pytest config object

Stops at first non-None result, see firstresult: stop at first non-None result

pytest_terminal_summary(terminalreporter, exitstatus, config)[source]

Add a section to terminal summary reporting.

Parameters
  • terminalreporter (_pytest.terminal.TerminalReporter) – the internal terminal reporter object

  • exitstatus (int) – the exit status that will be reported back to the OS

  • config (_pytest.config.Config) – pytest config object

New in version 4.2: The config parameter.
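A minimal sketch: TerminalReporter.section() and .line() are the usual building blocks for adding a summary section:

```python
def pytest_terminal_summary(terminalreporter, exitstatus, config):
    terminalreporter.section("wrap-up")
    terminalreporter.line("exit status will be {}".format(exitstatus))
```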

pytest_fixture_setup(fixturedef, request)[source]

performs fixture setup execution.

Returns

The return value of the call to the fixture function

Stops at first non-None result, see firstresult: stop at first non-None result

Note

If the fixture function returns None, other implementations of this hook function will continue to be called, according to the behavior of the firstresult: stop at first non-None result option.

pytest_fixture_post_finalizer(fixturedef, request)[source]

Called after fixture teardown, but before the cache is cleared, so the fixture result fixturedef.cached_result is still available (not None).

pytest_warning_captured(warning_message, when, item, location)[source]

(Deprecated) Process a warning captured by the internal pytest warnings plugin.

This hook is considered deprecated and will be removed in a future pytest version. Use pytest_warning_recorded() instead.

Parameters
  • warning_message (warnings.WarningMessage) – The captured warning. This is the same object produced by warnings.catch_warnings(), and contains the same attributes as the parameters of warnings.showwarning().

  • when (str) –

    Indicates when the warning was captured. Possible values:

    • "config": during pytest configuration/initialization stage.

    • "collect": during test collection.

    • "runtest": during test execution.

  • item (pytest.Item|None) – The item being executed if when is "runtest", otherwise None.

  • location (tuple) – Holds information about the execution context of the captured warning (filename, linenumber, function). function evaluates to <module> when the execution context is at the module level.

pytest_warning_recorded(warning_message: warnings.WarningMessage, when: str, nodeid: str, location: Tuple[str, int, str])[source]

Process a warning captured by the internal pytest warnings plugin.

Parameters
  • warning_message (warnings.WarningMessage) – The captured warning. This is the same object produced by warnings.catch_warnings(), and contains the same attributes as the parameters of warnings.showwarning().

  • when (str) –

    Indicates when the warning was captured. Possible values:

    • "config": during pytest configuration/initialization stage.

    • "collect": during test collection.

    • "runtest": during test execution.

  • nodeid (str) – full id of the item

  • location (tuple) – Holds information about the execution context of the captured warning (filename, linenumber, function). function evaluates to <module> when the execution context is at the module level.

Central hook for reporting about test execution:

pytest_runtest_logreport(report)[source]

process a test setup/call/teardown report relating to the respective phase of executing a test.

Assertion related hooks:

pytest_assertrepr_compare(config: Config, op: str, left: object, right: object) Optional[List[str]][source]

Return explanation for comparisons in failing assert expressions.

Return None for no custom explanation, otherwise return a list of strings. The strings will be joined by newlines but any newlines in a string will be escaped. Note that all but the first line will be indented slightly, the intention is for the first line to be a summary.

Stops at first non-None result, see firstresult: stop at first non-None result.

Parameters

config (_pytest.config.Config) – pytest config object
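A sketch using a hypothetical Money class (not a pytest type): provide a domain-specific explanation for failed == comparisons and None for everything else, with the first returned line serving as the summary:

```python
class Money:
    """Illustrative value type, defined here only for the sketch."""
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency

def pytest_assertrepr_compare(config, op, left, right):
    if isinstance(left, Money) and isinstance(right, Money) and op == "==":
        return [
            "comparing Money instances:",  # first line: the summary
            "   amounts: {} != {}".format(left.amount, right.amount),
        ]
    return None
```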

pytest_assertion_pass(item, lineno, orig, expl)[source]

(Experimental)

New in version 5.0.

Hook called whenever an assertion passes.

Use this hook to do some processing after a passing assertion. The original assertion information is available in the orig string and the pytest introspected assertion information is available in the expl string.

This hook must be explicitly enabled by the enable_assertion_pass_hook ini-file option:

[pytest]
enable_assertion_pass_hook=true

You need to clean the .pyc files in your project directory and interpreter libraries when enabling this option, as assertions will need to be re-written.

Parameters
  • item (_pytest.nodes.Item) – pytest item object of current test

  • lineno (int) – line number of the assert statement

  • orig (string) – string with original assertion

  • expl (string) – string with assert explanation

Note

This hook is experimental, so its parameters or even the hook itself might be changed/removed without warning in any future pytest release.

If you find this hook useful, please share your feedback opening an issue.

Debugging/Interaction hooks

There are a few hooks which can be used for special reporting or interaction with exceptions:

pytest_internalerror(excrepr, excinfo)[source]

called for internal errors.

pytest_keyboard_interrupt(excinfo)[source]

called for keyboard interrupt.

pytest_exception_interact(node, call, report)[source]

called when an exception was raised which can potentially be interactively handled.

This hook is only called if an exception was raised that is not an internal exception like skip.Exception.

pytest_enter_pdb(config, pdb)[source]

called upon pdb.set_trace(); can be used by plugins to take special action just before the Python debugger enters interactive mode.


Objects

Full reference to objects accessible from fixtures or hooks.

CallInfo

class CallInfo[source]

Result/Exception info of a function invocation.

when

context of invocation

Class

class Class[source]

Bases: _pytest.python.PyCollector

Collector for test methods.

classmethod from_parent(parent, *, name, obj=None)[source]

The public constructor

collect()[source]

returns a list of children (items and collectors) for this collection node.

Collector

class Collector[source]

Bases: _pytest.nodes.Node

Collector instances create children through collect() and thus iteratively build a tree.

exception CollectError[source]

Bases: Exception

an error during collection, contains a custom message.

collect()[source]

returns a list of children (items and collectors) for this collection node.

repr_failure(excinfo)[source]

represent a collection failure.

Config

class Config[source]

Access to configuration values, pluginmanager and plugin hooks.

Variables
  • pluginmanager (PytestPluginManager) – the plugin manager handles plugin registration and hook invocation.

  • option (argparse.Namespace) – access to command line option as attributes.

  • invocation_params (InvocationParams) –

    Object containing the parameters regarding the pytest.main invocation.

    Contains the following read-only attributes:

    • args: tuple of command-line arguments as passed to pytest.main().

    • plugins: list of extra plugins, might be None.

    • dir: directory where pytest.main() was invoked from.

class InvocationParams(args, plugins, dir: pathlib.Path)[source]

Holds parameters passed during pytest.main()

New in version 5.1.

Note

Note that the environment variable PYTEST_ADDOPTS and the addopts ini option are handled by pytest, and are not included in the args attribute.

Plugins accessing InvocationParams must be aware of that.

property invocation_dir

Backward compatibility

add_cleanup(func)[source]

Add a function to be called when the config object gets out of use (usually coinciding with pytest_unconfigure).

classmethod fromdictargs(option_dict, args)[source]

constructor usable for subprocesses.

addinivalue_line(name, line)[source]

add a line to an ini-file option. The option must have been declared but might not yet be set, in which case the line becomes the first line of its value.

getini(name: str)[source]

return configuration value from an ini file. If the specified name hasn’t been registered through a prior parser.addini call (usually from a plugin), a ValueError is raised.

getoption(name: str, default=<NOTSET>, skip: bool = False)[source]

return command line option value.

Parameters
  • name – name of the option. You may also specify the literal --OPT option instead of the “dest” option name.

  • default – default value if no option of that name exists.

  • skip – if True, raise pytest.skip if the option does not exist or has a None value.

getvalue(name, path=None)[source]

(deprecated, use getoption())

getvalueorskip(name, path=None)[source]

(deprecated, use getoption(skip=True))

ExceptionInfo

class ExceptionInfo(excinfo: Optional[Tuple[Type[_E], _E, types.TracebackType]], striptext: str = '', traceback: Optional[_pytest._code.code.Traceback] = None)[source]

wraps sys.exc_info() objects and offers help for navigating the traceback.

classmethod from_exc_info(exc_info: Tuple[Type[_E], _E, types.TracebackType], exprinfo: Optional[str] = None) ExceptionInfo[_E][source]

returns an ExceptionInfo for an existing exc_info tuple.

Warning

Experimental API

Parameters

exprinfo – a text string helping to determine if we should strip AssertionError from the output, defaults to the exception message/__str__()

classmethod from_current(exprinfo: Optional[str] = None) _pytest._code.code.ExceptionInfo[BaseException][source]

returns an ExceptionInfo matching the current traceback

Warning

Experimental API

Parameters

exprinfo – a text string helping to determine if we should strip AssertionError from the output, defaults to the exception message/__str__()

classmethod for_later() _pytest._code.code.ExceptionInfo[_pytest._code.code._E][source]

return an unfilled ExceptionInfo

fill_unfilled(exc_info: Tuple[Type[_E], _pytest._code.code._E, types.TracebackType]) None[source]

fill an unfilled ExceptionInfo created with for_later()

property type: Type[_E]

the exception class

property value: _pytest._code.code._E

the exception value

property tb: types.TracebackType

the exception raw traceback

property typename: str

the type name of the exception

property traceback: _pytest._code.code.Traceback

the traceback

exconly(tryshort: bool = False) str[source]

return the exception as a string

when ‘tryshort’ resolves to True, and the exception is a _pytest._code._AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ‘ is removed from the beginning)

errisinstance(exc: Union[Type[BaseException], Tuple[Type[BaseException], ...]]) bool[source]

return True if the exception is an instance of exc

getrepr(showlocals: bool = False, style: _TracebackStyle = 'long', abspath: bool = False, tbfilter: bool = True, funcargs: bool = False, truncate_locals: bool = True, chain: bool = True) Union[ReprExceptionInfo, ExceptionChainRepr][source]

Return str()able representation of this exception info.

Parameters
  • showlocals (bool) – Show locals per traceback entry. Ignored if style=="native".

  • style (str) – long|short|no|native traceback style

  • abspath (bool) – If paths should be changed to absolute or left unchanged.

  • tbfilter (bool) – Hide entries that contain a local variable __tracebackhide__==True. Ignored if style=="native".

  • funcargs (bool) – Show fixtures (“funcargs” for legacy purposes) per traceback entry.

  • truncate_locals (bool) – With showlocals==True, make sure locals can be safely represented as strings.

  • chain (bool) – if chained exceptions in Python 3 should be shown.

Changed in version 3.9: Added the chain parameter.

match(regexp: Union[str, Pattern]) Literal[True][source]

Check whether the regular expression regexp matches the string representation of the exception using re.search(). If it matches True is returned. If it doesn’t match an AssertionError is raised.
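A short sketch, assuming pytest is importable: match() is also what pytest.raises(..., match=...) uses internally, and it can be called explicitly on the ExceptionInfo afterwards:

```python
import pytest

def check_division_error():
    with pytest.raises(ZeroDivisionError) as excinfo:
        1 / 0
    # match() returns True on success and raises AssertionError otherwise.
    assert excinfo.match(r"division by zero")
```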

pytest.ExitCode

class ExitCode(value)[source]

New in version 5.0.

Encodes the valid exit codes returned by pytest.

Currently users and plugins may supply other exit codes as well.

OK = 0

tests passed

TESTS_FAILED = 1

tests failed

INTERRUPTED = 2

pytest was interrupted

INTERNAL_ERROR = 3

an internal error got in the way

USAGE_ERROR = 4

pytest was misused

NO_TESTS_COLLECTED = 5

pytest couldn’t find tests
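Because ExitCode is an IntEnum, the integer returned by pytest.main() (or a CI subprocess exit status) compares directly against its members. The enum below mirrors pytest.ExitCode purely to keep the sketch self-contained:

```python
import enum

class ExitCodeSketch(enum.IntEnum):
    """Stand-in mirroring pytest.ExitCode for illustration."""
    OK = 0
    TESTS_FAILED = 1
    INTERRUPTED = 2
    INTERNAL_ERROR = 3
    USAGE_ERROR = 4
    NO_TESTS_COLLECTED = 5

def suite_green(status):
    # A common CI policy choice: anything other than OK (including
    # "no tests collected") counts as a failure.
    return status == ExitCodeSketch.OK
```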

FixtureDef

class FixtureDef[source]

Bases: object

A container for a factory definition.

FSCollector

class FSCollector[source]

Bases: _pytest.nodes.Collector

classmethod from_parent(parent, *, fspath)[source]

The public constructor

Function

class Function[source]

Bases: _pytest.python.PyobjMixin, _pytest.nodes.Item

a Function Item is responsible for setting up and executing a Python test function.

originalname

original function name, without any decorations (for example parametrization adds a "[...]" suffix to function names).

New in version 3.0.

classmethod from_parent(parent, **kw)[source]

The public constructor

property function

underlying python ‘function’ object

property funcargnames

alias attribute for fixturenames for pre-2.3 compatibility

runtest() None[source]

execute the underlying test function.

Item

class Item[source]

Bases: _pytest.nodes.Node

a basic test invocation item. Note that for a single function there might be multiple test invocation items.

user_properties

user properties is a list of tuples (name, value) that holds user defined properties for this test.

add_report_section(when: str, key: str, content: str) None[source]

Adds a new report section, similar to what’s done internally to add stdout and stderr captured output:

item.add_report_section("call", "stdout", "report section contents")
Parameters
  • when (str) – One of the possible capture states, "setup", "call", "teardown".

  • key (str) – Name of the section, can be customized at will. Pytest uses "stdout" and "stderr" internally.

  • content (str) – The full contents as a string.

MarkDecorator

class MarkDecorator(mark)[source]

A decorator for test functions and test classes. When applied it will create Mark objects which are often created like this:

mark1 = pytest.mark.NAME              # simple MarkDecorator
mark2 = pytest.mark.NAME(name1=value) # parametrized MarkDecorator

and can then be applied as decorators to test functions:

@mark2
def test_function():
    pass

When a MarkDecorator instance is called it does the following:

  1. If called with a single class as its only positional argument and no additional keyword arguments, it attaches itself to the class so it gets applied automatically to all test cases found in that class.

  2. If called with a single function as its only positional argument and no additional keyword arguments, it attaches a MarkInfo object to the function, containing all the arguments already stored internally in the MarkDecorator.

  3. When called in any other case, it performs a ‘fake construction’ call, i.e. it returns a new MarkDecorator instance with the original MarkDecorator’s content updated with the arguments passed to this call.

Note: The rules above prevent a MarkDecorator from storing only a single function or class reference as its positional argument with no additional keyword or positional arguments.

property name

alias for mark.name

property args

alias for mark.args

property kwargs

alias for mark.kwargs

with_args(*args, **kwargs)[source]

return a MarkDecorator with extra arguments added

unlike calling the MarkDecorator, this can be used even if the sole argument is a callable/class

Returns

MarkDecorator

MarkGenerator

class MarkGenerator[source]

Factory for MarkDecorator objects - exposed as a pytest.mark singleton instance. Example:

import pytest
@pytest.mark.slowtest
def test_function():
   pass

will set a ‘slowtest’ MarkInfo object on the test_function object.

Mark

class Mark(name: str, args, kwargs, param_ids_from: Optional[Mark] = None, param_ids_generated: Optional[List[str]] = None)[source]
name

name of the mark

args

positional arguments of the mark decorator

kwargs

keyword arguments of the mark decorator

combined_with(other: _pytest.mark.structures.Mark) _pytest.mark.structures.Mark[source]
Parameters

other (Mark) – the mark to combine with

Return type

Mark

combines by appending args and merging the mappings

Metafunc

class Metafunc(definition: _pytest.python.FunctionDefinition, fixtureinfo: _pytest.fixtures.FuncFixtureInfo, config: _pytest.config.Config, cls=None, module=None)[source]

Metafunc objects are passed to the pytest_generate_tests hook. They help to inspect a test function and to generate tests according to test configuration or values specified in the class or module where a test function is defined.

config

access to the _pytest.config.Config object for the test session

module

the module object where the test function is defined in.

function

underlying python test function

fixturenames

set of fixture names required by the test function

cls

class object where the test function is defined in or None.

property funcargnames

alias attribute for fixturenames for pre-2.3 compatibility

parametrize(argnames: Union[str, List[str], Tuple[str, ...]], argvalues: Iterable[Union[_pytest.mark.structures.ParameterSet, Sequence[object], object]], indirect: Union[bool, Sequence[str]] = False, ids: Optional[Union[Iterable[Union[None, str, float, int, bool]], Callable[[object], Optional[object]]]] = None, scope: Optional[str] = None, *, _param_mark: Optional[_pytest.mark.structures.Mark] = None) None[source]

Add new invocations to the underlying test function using the list of argvalues for the given argnames. Parametrization is performed during the collection phase. If you need to set up expensive resources, consider setting indirect so the setup happens at test setup time rather than at collection time.

Parameters
  • argnames – a comma-separated string denoting one or more argument names, or a list/tuple of argument strings.

  • argvalues – The list of argvalues determines how often a test is invoked with different argument values. If only one argname was specified argvalues is a list of values. If N argnames were specified, argvalues must be a list of N-tuples, where each tuple-element specifies a value for its respective argname.

  • indirect – The list of argnames or boolean. A list of arguments’ names (subset of argnames). If True the list contains all names from the argnames. Each argvalue corresponding to an argname in this list will be passed as request.param to its respective argname fixture function so that it can perform more expensive setups during the setup phase of a test rather than at collection time.

  • ids

    sequence of (or generator for) ids for argvalues,

    or a callable to return part of the id for each argvalue.

    With sequences (and generators like itertools.count()) the returned ids should be of type string, int, float, bool, or None. They are mapped to the corresponding index in argvalues. None means to use the auto-generated id.

    If it is a callable it will be called for each entry in argvalues, and the return value is used as part of the auto-generated id for the whole set (where parts are joined with dashes (“-“)). This is useful to provide more specific ids for certain items, e.g. dates. Returning None will use an auto-generated id.

    If no ids are provided they will be generated automatically from the argvalues.

  • scope – if specified, it denotes the scope of the parameters. The scope is used for grouping tests by parameter instances. It will also override any fixture-function defined scope, allowing a dynamic scope to be set using test context or configuration.
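A sketch of the ids-callable behaviour described above: return a string for values you want to name and None to fall back to the auto-generated id. The fixture name "start" is an illustrative assumption:

```python
import datetime

def idfn(val):
    # Readable ids for dates; None defers to the generated id.
    if isinstance(val, datetime.date):
        return val.isoformat()
    return None

# Typical use inside pytest_generate_tests:
#   metafunc.parametrize("start", [datetime.date(2021, 1, 1)], ids=idfn)
```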

Module

class Module[source]

Bases: _pytest.python.PyFile, _pytest.python.PyCollector

Collector for test classes and functions in a module.

collect()[source]

returns a list of children (items and collectors) for this collection node.

Node

class Node[source]

base class for Collector and Item, the components of the test collection tree. Collector subclasses have children; Items are terminal nodes.

name

a unique name within the scope of the parent node

parent

the parent collector node.

fspath

filesystem path where this node was collected from (can be None)

keywords

keywords/markers collected from all scopes

own_markers

the marker objects belonging to this node

extra_keyword_matches

allow adding of extra keywords to use for matching

classmethod from_parent(parent: _pytest.nodes.Node, **kw)[source]

Public Constructor for Nodes

This indirection was introduced in order to enable removing the fragile logic from the node constructors.

Subclasses can use super().from_parent(...) when overriding the construction

Parameters

parent – the parent node of this test Node

property ihook

fspath sensitive hook proxy used to call pytest hooks

warn(warning)[source]

Issue a warning for this item.

Warnings will be displayed after the test session, unless explicitly suppressed

Parameters

warning (Warning) – the warning instance to issue. Must be a subclass of PytestWarning.

Raises

ValueError – if warning instance is not a subclass of PytestWarning.

Example usage:

node.warn(PytestWarning("some message"))
property nodeid: str

a ::-separated string denoting its collection tree address.

listchain() List[_pytest.nodes.Node][source]

return list of all parent collectors up to self, starting from root of collection tree.

add_marker(marker: Union[str, _pytest.mark.structures.MarkDecorator], append: bool = True) None[source]

dynamically add a marker object to the node.

Parameters

  • marker (str or pytest.mark.* object) – the marker to add.

  • append (bool) – whether to append the marker; if False, insert it at position 0.

iter_markers(name=None)[source]
Parameters

name – if given, filter the results by the name attribute

iterate over all markers of the node

for ... in iter_markers_with_node(name=None)[source]
Parameters

name – if given, filter the results by the name attribute

iterate over all markers of the node; returns a sequence of (node, mark) tuples.

get_closest_marker(name: str, default: Optional[_pytest.mark.structures.Mark] = None) Optional[_pytest.mark.structures.Mark][source]

return the first marker matching the name, from closest (for example function) to farther level (for example module level).

Parameters
  • default – fallback return value if no marker was found

  • name – name to filter by

listextrakeywords()[source]

Return a set of all extra keywords in self and any parents.

addfinalizer(fin)[source]

register a function to be called when this node is finalized.

This method can only be called when this node is active in a setup chain, for example during self.setup().

getparent(cls)[source]

get the next parent node (including ourself) which is an instance of the given class

Parser

class Parser[source]

Parser for command line arguments and ini-file values.

Variables

extra_info – dict of generic param -> value to display in case there’s an error processing the command line arguments.

getgroup(name: str, description: str = '', after: Optional[str] = None) _pytest.config.argparsing.OptionGroup[source]

get (or create) a named option Group.

Name

name of the option group.

Description

long description for --help output.

After

name of other group, used for ordering --help output.

The returned group object has an addoption method with the same signature as parser.addoption but will be shown in the respective group in the output of pytest --help.

addoption(*opts: str, **attrs: Any) None[source]

register a command line option.

Opts

option names, can be short or long options.

Attrs

same attributes which the add_argument() function of the argparse library accepts.

After command line parsing options are available on the pytest config object via config.option.NAME where NAME is usually set by passing a dest attribute, for example addoption("--long", dest="NAME", ...).
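A conftest.py sketch registering a flag; the option name "--runslow" is an illustrative assumption. After parsing it is readable as config.getoption("--runslow") or config.option.runslow:

```python
def pytest_addoption(parser):
    parser.addoption(
        "--runslow",
        action="store_true",
        default=False,
        help="also run tests marked as slow",
    )
```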

parse_known_args(args: Sequence[Union[str, py._path.local.LocalPath]], namespace: Optional[argparse.Namespace] = None) argparse.Namespace[source]

parses and returns a namespace object with known arguments at this point.

parse_known_and_unknown_args(args: Sequence[Union[str, py._path.local.LocalPath]], namespace: Optional[argparse.Namespace] = None) Tuple[argparse.Namespace, List[str]][source]

parses and returns a namespace object with known arguments, and the remaining arguments unknown at this point.

addini(name: str, help: Optional[str], type: Optional[Literal['pathlist', 'args', 'linelist', 'bool', 'int']] = None, *args, **kwargs) None[source]

register an ini-file option.

Parameters
  • name (str) – name of the ini-variable

  • help – help text to display (None suppresses it).

  • type (str) – type of the variable, one of pathlist, args, linelist, bool, or int.

  • default – default value if no ini-file option exists but is queried.

The value of ini-variables can be retrieved via a call to config.getini(name).

PluginManager

class PluginManager[source]

Core PluginManager class which manages registration of plugin objects and 1:N hook calling.

You can register new hooks by calling add_hookspecs(module_or_class). You can register plugin objects (which contain hooks) by calling register(plugin). The PluginManager is initialized with a prefix that is searched for in the names of the dict of registered plugin objects.

For debugging purposes you can call PluginManager.enable_tracing() which will subsequently send debug information to the trace helper.

register(plugin, name=None)[source]

Register a plugin and return its canonical name or None if the name is blocked from registering. Raise a ValueError if the plugin is already registered.

unregister(plugin=None, name=None)[source]

unregister a plugin object and all its contained hook implementations from internal data structures.

set_blocked(name)[source]

block registrations of the given name, unregister if already registered.

is_blocked(name)[source]

return True if the given plugin name is blocked.

add_hookspecs(module_or_class)[source]

add new hook specifications defined in the given module_or_class. Functions are recognized if they have been decorated accordingly.

get_plugins()[source]

return the set of registered plugins.

is_registered(plugin)[source]

Return True if the plugin is already registered.

get_canonical_name(plugin)[source]

Return canonical name for a plugin object. Note that a plugin may be registered under a different name which was specified by the caller of register(plugin, name). To obtain the name of a registered plugin use get_name(plugin) instead.

get_plugin(name)[source]

Return a plugin or None for the given name.

has_plugin(name)[source]

Return True if a plugin with the given name is registered.

get_name(plugin)[source]

Return name for registered plugin or None if not registered.

check_pending()[source]

Verify that all hooks which have not been verified against a hook specification are optional, otherwise raise PluginValidationError.

load_setuptools_entrypoints(group, name=None)[source]

Load modules from querying the specified setuptools group.

Parameters
  • group (str) – entry point group to load plugins

  • name (str) – if given, loads only plugins with the given name.

Return type

int

Returns

return the number of loaded plugins by this call.

list_plugin_distinfo()[source]

return list of distinfo/plugin tuples for all setuptools registered plugins.

list_name_plugin()[source]

return list of name/plugin pairs.

get_hookcallers(plugin)[source]

get all hook callers for the specified plugin.

add_hookcall_monitoring(before, after)[source]

add before/after tracing functions for all hooks and return an undo function which, when called, will remove the added tracers.

before(hook_name, hook_impls, kwargs) will be called ahead of all hook calls and receive a hookcaller instance, a list of HookImpl instances and the keyword arguments for the hook call.

after(outcome, hook_name, hook_impls, kwargs) receives the same arguments as before but also a pluggy._callers._Result object which represents the result of the overall hook call.

enable_tracing()[source]

enable tracing of hook calls and return an undo function.

subset_hook_caller(name, remove_plugins)[source]

Return a new _hooks._HookCaller instance for the named method which manages calls to all registered plugins except the ones from remove_plugins.

PytestPluginManager

class PytestPluginManager[source]

Bases: pluggy._manager.PluginManager

Overwrites pluggy.PluginManager to add pytest-specific functionality:

  • loading plugins from the command line, PYTEST_PLUGINS env variable and pytest_plugins global variables found in plugins being loaded;

  • conftest.py loading during start-up;

parse_hookimpl_opts(plugin, name)[source]
parse_hookspec_opts(module_or_class, name)[source]
register(plugin, name=None)[source]

Register a plugin and return its canonical name or None if the name is blocked from registering. Raise a ValueError if the plugin is already registered.

getplugin(name)[source]
hasplugin(name)[source]

Return True if the plugin with the given name is registered.

pytest_configure(config)[source]
consider_preparse(args, *, exclude_only=False)[source]
consider_pluginarg(arg)[source]
consider_conftest(conftestmodule)[source]
consider_env()[source]
consider_module(mod)[source]
import_plugin(modname, consider_entry_points=False)[source]

Import a plugin with the given modname. If consider_entry_points is True, entry point names are also considered when locating a plugin.

Session

class Session[source]

Bases: _pytest.nodes.FSCollector

exception Interrupted

Bases: KeyboardInterrupt

Signals an interrupted test run.

exception Failed

Bases: Exception

Signals a stop because the test run has failed.

pytest_deselected(items)[source]

Keep track of explicitly deselected items.

for ... in collect()[source]

Return a list of children (items and collectors) for this collection node.

TestReport

class TestReport[source]

Basic test report object (also used for setup and teardown calls if they fail).

nodeid = None

normalized collection node id

location = None

a (filesystempath, lineno, domaininfo) tuple indicating the actual location of a test item - it might be different from the collected one e.g. if a method is inherited from a different module.

keywords

a name -> value dictionary containing all keywords and markers associated with a test invocation.

outcome

test outcome, always one of “passed”, “failed”, “skipped”.

longrepr = None

None or a failure representation.

when = None

one of ‘setup’, ‘call’, ‘teardown’ to indicate runtest phase.

user_properties

user properties is a list of tuples (name, value) that holds user defined properties of the test

sections = []

List of pairs (str, str) of extra information which needs to be marshallable. Used by pytest to add captured text from stdout and stderr, but may be used by other plugins to add arbitrary information to reports.

duration

time it took to run just the test

classmethod from_item_and_call(item, call) → _pytest.reports.TestReport[source]

Factory method to create and fill a TestReport with standard item and call info.

property caplog

Return captured log lines, if log capturing is enabled

New in version 3.5.

property capstderr

Return captured text from stderr, if capturing is enabled

New in version 3.0.

property capstdout

Return captured text from stdout, if capturing is enabled

New in version 3.0.

property count_towards_summary

Experimental

Returns True if this report should be counted towards the totals shown at the end of the test session: “1 passed, 1 failure, etc”.

Note

This function is considered experimental, so beware that it is subject to changes even in patch releases.

property head_line

Experimental

Returns the head line shown with longrepr output for this report, most commonly during traceback representation of failures:

________ Test.foo ________

In the example above, the head_line is “Test.foo”.

Note

This function is considered experimental, so beware that it is subject to changes even in patch releases.

property longreprtext

Read-only property that returns the full string representation of longrepr.

New in version 3.0.

_Result

class _Result(result, excinfo)[source]
force_result(result)[source]

Force the result(s) to result.

If the hook was marked as a firstresult a single value should be set otherwise set a (modified) list of results. Any exceptions found during invocation will be deleted.

get_result()[source]

Get the result(s) for this hook call.

If the hook was marked as a firstresult only a single value will be returned otherwise a list of results.
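A hedged sketch of how a hookwrapper-style generator interacts with this API; FakeResult below is a hand-rolled stand-in for the real _Result object that pluggy supplies, used here only so the example is self-contained:

```python
# Minimal stand-in for pluggy's _Result object, for illustration only.
class FakeResult:
    def __init__(self, result):
        self._result = result

    def get_result(self):
        return self._result

    def force_result(self, result):
        self._result = result

def my_wrapper():
    # A hookwrapper-style generator: the hook itself runs at the yield.
    outcome = yield
    value = outcome.get_result()             # inspect the hook's result
    outcome.force_result(value + ["extra"])  # replace it

# Drive the generator by hand, the way pluggy would.
gen = my_wrapper()
next(gen)                    # advance to the yield
outcome = FakeResult(["a"])
try:
    gen.send(outcome)        # resume the wrapper with the outcome
except StopIteration:
    pass
```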

Special Variables

pytest treats some global variables in a special manner when defined in a test module.

collect_ignore

Tutorial: Customizing test collection

Can be declared in conftest.py files to exclude test directories or modules. Needs to be list[str].

collect_ignore = ["setup.py"]

collect_ignore_glob

Tutorial: Customizing test collection

Can be declared in conftest.py files to exclude test directories or modules with Unix shell-style wildcards. Needs to be list[str] where str can contain glob patterns.

collect_ignore_glob = ["*_ignore.py"]
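Because these are plain module-level variables, a conftest.py can also set them conditionally; a sketch (the file names are illustrative):

```python
# content of conftest.py (sketch; file names are illustrative)
import sys

collect_ignore = ["setup.py"]
if sys.platform.startswith("win"):
    # Skip a POSIX-only test module on Windows.
    collect_ignore.append("test_posix_only.py")
collect_ignore_glob = ["*_ignore.py"]
```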

pytest_plugins

Tutorial: Requiring/Loading plugins in a test module or conftest file

Can be declared at the global level in test modules and conftest.py files to register additional plugins. Can be either a str or Sequence[str].

pytest_plugins = "myapp.testsupport.myplugin"
pytest_plugins = ("myapp.testsupport.tools", "myapp.testsupport.regression")

pytestmark

Tutorial: Marking whole classes or modules

Can be declared at the global level in test modules to apply one or more marks to all test functions and methods. Can be either a single mark or a list of marks.

import pytest

pytestmark = pytest.mark.webtest
import pytest

pytestmark = [pytest.mark.integration, pytest.mark.slow]

PYTEST_DONT_REWRITE (module docstring)

The text PYTEST_DONT_REWRITE can be added to any module docstring to disable assertion rewriting for that module.

Environment Variables

Environment variables that can be used to change pytest’s behavior.

PYTEST_ADDOPTS

This contains a command line (parsed by the shlex module) that will be prepended to the command line given by the user; see How to change command line options defaults for more information.
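For example (a sketch; the options shown are arbitrary):

```shell
# These options are prepended to whatever the user passes on the command
# line, so `pytest test_mod.py` effectively runs
# `pytest -v --maxfail=1 test_mod.py`.
export PYTEST_ADDOPTS="-v --maxfail=1"
```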

PYTEST_DEBUG

When set, pytest will print tracing and debug information.

PYTEST_PLUGINS

Contains a comma-separated list of modules that should be loaded as plugins:

export PYTEST_PLUGINS=mymodule.plugin,xdist

PYTEST_DISABLE_PLUGIN_AUTOLOAD

When set, disables plugin auto-loading through setuptools entrypoints. Only explicitly specified plugins will be loaded.

PYTEST_CURRENT_TEST

This is not meant to be set by users, but is set by pytest internally with the name of the current test so other processes can inspect it, see PYTEST_CURRENT_TEST environment variable for more information.

Exceptions

UsageError

class UsageError[source]

error in pytest usage or invocation

Configuration Options

Here is a list of builtin configuration options that may be written in a pytest.ini, tox.ini or setup.cfg file, usually located at the root of your repository. All options must be under a [pytest] section ([tool:pytest] for setup.cfg files).

Warning

Usage of setup.cfg is not recommended except for very simple use cases. .cfg files use a different parser than pytest.ini and tox.ini, which might cause hard-to-track-down problems. When possible, it is recommended to use the latter files to hold your pytest configuration.

Configuration file options may be overridden on the command line by using -o/--override-ini, which can also be passed multiple times. The expected format is name=value. For example:

pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
addopts

Add the specified OPTS to the set of command line arguments as if they had been specified by the user. Example: if you have this ini file content:

# content of pytest.ini
[pytest]
addopts = --maxfail=2 -rf  # exit after 2 failures, report fail info

issuing pytest test_hello.py actually means:

pytest --maxfail=2 -rf test_hello.py

Default is to add no options.

cache_dir

Sets the directory where the cache plugin stores its content. The default directory is .pytest_cache, which is created in rootdir. The directory may be a relative or absolute path; a relative path is created relative to rootdir. Additionally, the path may contain environment variables, which will be expanded. For more information about the cache plugin, please refer to Cache: working with cross-testrun state.

confcutdir

Sets a directory at which the upward search for conftest.py files stops. By default, pytest stops searching for conftest.py files upward at the pytest.ini/tox.ini/setup.cfg file of the project, if any, or at the file-system root.

console_output_style

Sets the console output style while running tests:

  • classic: classic pytest output.

  • progress: like classic pytest output, but with a progress indicator.

  • count: like progress, but shows progress as the number of tests completed instead of a percent.

The default is progress, but you can fall back to classic if you prefer it or if the new mode causes unexpected problems:

# content of pytest.ini
[pytest]
console_output_style = classic
doctest_encoding

Default encoding to use to decode text files with docstrings. See how pytest handles doctests.

doctest_optionflags

One or more doctest flag names from the standard doctest module. See how pytest handles doctests.

empty_parameter_set_mark

Allows picking the action for empty parameter sets in parametrization:

  • skip skips tests with an empty parameterset (default)

  • xfail marks tests with an empty parameterset as xfail(run=False)

  • fail_at_collect raises an exception if parametrize collects an empty parameter set

# content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail

Note

The default value of this option is planned to change to xfail in future releases as this is considered less error prone, see #3155 for more details.

faulthandler_timeout

Dumps the tracebacks of all threads if a test takes longer than the given number of seconds to run (including fixture setup and teardown). Implemented using the faulthandler.dump_traceback_later function, so all caveats there apply.

# content of pytest.ini
[pytest]
faulthandler_timeout=5

For more information please refer to Fault Handler.

filterwarnings

Sets a list of filters and actions that should be taken for matched warnings. By default all warnings emitted during the test session will be displayed in a summary at the end of the test session.

# content of pytest.ini
[pytest]
filterwarnings =
    error
    ignore::DeprecationWarning

This tells pytest to ignore deprecation warnings and turn all other warnings into errors. For more information please refer to Warnings Capture.
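The precedence semantics (later filters take precedence over earlier ones) can be reproduced with the stdlib warnings module, which uses the same filter model; a minimal sketch mirroring the ini example above:

```python
import warnings

# Sketch: reproduce the ini example with the stdlib warnings module.
# Later filters take precedence, as with the filterwarnings ini option.
warnings.resetwarnings()
warnings.filterwarnings("error")                                # error
warnings.filterwarnings("ignore", category=DeprecationWarning)  # ignore::DeprecationWarning

warnings.warn("old API", DeprecationWarning)   # ignored, no exception

try:
    warnings.warn("boom", UserWarning)         # turned into an error
    raised = False
except UserWarning:
    raised = True
```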

junit_duration_report

New in version 4.1.

Configures how durations are recorded into the JUnit XML report:

  • total (the default): duration times reported include setup, call, and teardown times.

  • call: duration times reported include only call times, excluding setup and teardown.

[pytest]
junit_duration_report = call
junit_family

New in version 4.2.

Configures the format of the generated JUnit XML file. The possible options are:

  • xunit1 (or legacy): produces old style output, compatible with the xunit 1.0 format. This is the default.

  • xunit2: produces xunit 2.0 style output, which should be more compatible with the latest Jenkins versions.

[pytest]
junit_family = xunit2
junit_logging

New in version 3.5.

Changed in version 5.4: log, all, out-err options added.

Configures if captured output should be written to the JUnit XML file. Valid values are:

  • log: write only logging captured output.

  • system-out: write captured stdout contents.

  • system-err: write captured stderr contents.

  • out-err: write both captured stdout and stderr contents.

  • all: write captured logging, stdout and stderr contents.

  • no (the default): no captured output is written.

[pytest]
junit_logging = system-out
junit_log_passing_tests

New in version 4.6.

If junit_logging != "no", configures if the captured output should be written to the JUnit XML file for passing tests. Default is True.

[pytest]
junit_log_passing_tests = False
junit_suite_name

To set the name of the root test suite xml item, you can configure the junit_suite_name option in your config file:

[pytest]
junit_suite_name = my_suite
log_auto_indent

Allow selective auto-indentation of multiline log messages.

Supports command line option --log-auto-indent [value] and config option log_auto_indent = [value] to set the auto-indentation behavior for all logging.

[value] can be:
  • True or “On” - Dynamically auto-indent multiline log messages

  • False or “Off” or 0 - Do not auto-indent multiline log messages (the default behavior)

  • [positive integer] - auto-indent multiline log messages by [value] spaces

[pytest]
log_auto_indent = False

Supports passing the kwarg extra={"auto_indent": [value]} to calls to logging.log() to specify auto-indentation behavior for a specific entry in the log. The extra kwarg overrides the value specified on the command line or in the config.
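A sketch of the extra kwarg form; note that the auto_indent value is only meaningful to pytest's logging plugin, while plain logging simply stores it as an attribute on the LogRecord (the logger name and handler below are illustrative):

```python
import logging

records = []

class Capture(logging.Handler):
    # Collect records so the extra attribute can be inspected.
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("auto_indent_demo")  # illustrative name
logger.addHandler(Capture())
logger.setLevel(logging.INFO)

# The extra mapping becomes attributes on the LogRecord; pytest's
# logging plugin reads auto_indent from there for this one entry.
logger.info("line one\nline two", extra={"auto_indent": 4})
```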

log_cli

Enable log display during test run (also known as “live logging”). The default is False.

[pytest]
log_cli = True
log_cli_date_format

Sets a time.strftime()-compatible string that will be used when formatting dates for live logging.

[pytest]
log_cli_date_format = %Y-%m-%d %H:%M:%S

For more information, see Live Logs.

log_cli_format

Sets a logging-compatible string used to format live logging messages.

[pytest]
log_cli_format = %(asctime)s %(levelname)s %(message)s

For more information, see Live Logs.

log_cli_level

Sets the minimum log message level that should be captured for live logging. The integer value or the names of the levels can be used.

[pytest]
log_cli_level = INFO

For more information, see Live Logs.

log_date_format

Sets a time.strftime()-compatible string that will be used when formatting dates for logging capture.

[pytest]
log_date_format = %Y-%m-%d %H:%M:%S

For more information, see Logging.

log_file

Sets a file name relative to the pytest.ini file where log messages should be written to, in addition to the other logging facilities that are active.

[pytest]
log_file = logs/pytest-logs.txt

For more information, see Logging.

log_file_date_format

Sets a time.strftime()-compatible string that will be used when formatting dates for the logging file.

[pytest]
log_file_date_format = %Y-%m-%d %H:%M:%S

For more information, see Logging.

log_file_format

Sets a logging-compatible string used to format logging messages redirected to the logging file.

[pytest]
log_file_format = %(asctime)s %(levelname)s %(message)s

For more information, see Logging.

log_file_level

Sets the minimum log message level that should be captured for the logging file. The integer value or the names of the levels can be used.

[pytest]
log_file_level = INFO

For more information, see Logging.

log_format

Sets a logging-compatible string used to format captured logging messages.

[pytest]
log_format = %(asctime)s %(levelname)s %(message)s

For more information, see Logging.

log_level

Sets the minimum log message level that should be captured for logging capture. The integer value or the names of the levels can be used.

[pytest]
log_level = INFO

For more information, see Logging.

log_print

If set to False, will disable displaying captured logging messages for failed tests.

[pytest]
log_print = False

For more information, see Logging.

markers

When the --strict-markers or --strict command-line arguments are used, only known markers - defined in code by core pytest or some plugin - are allowed.

You can list additional markers in this setting to add them to the whitelist, in which case you probably want to add --strict-markers to addopts to avoid future regressions:

[pytest]
addopts = --strict-markers
markers =
    slow
    serial
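Markers registered this way are then applied in test modules as usual; a sketch using the slow marker from the example above (the test name is illustrative):

```python
import pytest

# Applies the registered (example) marker `slow` to one test; with
# --strict-markers, unregistered marker names would be an error instead.
@pytest.mark.slow
def test_nightly_rebuild():
    assert True
```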
minversion

Specifies a minimal pytest version required for running tests.

# content of pytest.ini
[pytest]
minversion = 3.0  # will fail if we run with pytest-2.8
norecursedirs

Set the directory basename patterns to avoid when recursing for test discovery. The individual (fnmatch-style) patterns are applied to the basename of a directory to decide whether to recurse into it. Pattern matching characters:

*       matches everything
?       matches any single character
[seq]   matches any character in seq
[!seq]  matches any char not in seq

Default patterns are '.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'. Setting a norecursedirs replaces the default. Here is an example of how to avoid certain directories:

[pytest]
norecursedirs = .svn _build tmp*

This would tell pytest to not look into typical subversion or sphinx-build directories or into any tmp prefixed directory.

Additionally, pytest will attempt to intelligently identify and ignore a virtualenv by the presence of an activation script. Any directory deemed to be the root of a virtual environment will not be considered during test collection unless ‑‑collect‑in‑virtualenv is given. Note also that norecursedirs takes precedence over ‑‑collect‑in‑virtualenv; e.g. if you intend to run tests in a virtualenv with a base directory that matches '.*' you must override norecursedirs in addition to using the ‑‑collect‑in‑virtualenv flag.
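The pattern semantics are those of the stdlib fnmatch module, matched against directory basenames; for instance (directory names below are illustrative):

```python
from fnmatch import fnmatch

# norecursedirs patterns match directory basenames, fnmatch-style.
assert fnmatch("tmp_build", "tmp*")   # tmp* matches tmp-prefixed dirs
assert fnmatch(".svn", ".*")          # .* matches hidden dirs
assert not fnmatch("tests", "tmp*")   # ordinary dirs are still recursed into
```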

python_classes

One or more name prefixes or glob-style patterns determining which classes are considered for test collection. Search for multiple glob patterns by adding a space between patterns. By default, pytest considers any class prefixed with Test a test collection class. Here is an example of how to collect tests from classes that end in Suite:

[pytest]
python_classes = *Suite

Note that unittest.TestCase derived classes are always collected regardless of this option, as unittest’s own collection framework is used to collect those tests.

python_files

One or more glob-style file patterns determining which Python files are considered test modules. Search for multiple glob patterns by adding a space between patterns:

[pytest]
python_files = test_*.py check_*.py example_*.py

Or one per line:

[pytest]
python_files =
    test_*.py
    check_*.py
    example_*.py

By default, files matching test_*.py and *_test.py will be considered test modules.

python_functions

One or more name prefixes or glob-patterns determining which test functions and methods are considered tests. Search for multiple glob patterns by adding a space between patterns. By default, pytest will consider any function prefixed with test as a test. Here is an example of how to collect test functions and methods that end in _test:

[pytest]
python_functions = *_test

Note that this has no effect on methods that live on a unittest.TestCase derived class, as unittest’s own collection framework is used to collect those tests.

See Changing naming conventions for more detailed examples.

testpaths

Sets a list of directories that should be searched for tests when no specific directories, files or test ids are given on the command line when executing pytest from the rootdir directory. This is useful when all project tests live in a known location, to speed up test collection and to avoid accidentally picking up undesired tests.

[pytest]
testpaths = testing doc

This tells pytest to only look for tests in testing and doc directories when executing from the root directory.

usefixtures

List of fixtures that will be applied to all test functions; this is semantically the same as applying the @pytest.mark.usefixtures marker to all test functions.

[pytest]
usefixtures =
    clean_db
xfail_strict

If set to True, tests marked with @pytest.mark.xfail that actually succeed will by default fail the test suite. For more information, see strict parameter.

[pytest]
xfail_strict = True
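The per-test equivalent is the strict parameter of the xfail marker; a sketch (the reason text is illustrative):

```python
import pytest

# strict=True here mirrors xfail_strict = True for this one test:
# if the test unexpectedly passes, the suite fails.
@pytest.mark.xfail(reason="known bug (illustrative)", strict=True)
def test_known_bug():
    assert False  # expected to fail for now
```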