
pytest-xdist (Use Multiple CPU Cores)

Add pytest-xdist to your dependencies, and then run your tests with pytest -n auto. Pytest will launch as many Python interpreters as you have CPU cores, and distribute the tests among them.

As long as the cost of launching the extra Python interpreters is smaller than the time saved by running the tests across multiple cores, you'll get a substantial speed-up.
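For example (assuming pytest-xdist is installed in your test environment):

# let pytest-xdist pick one worker process per CPU core
pytest -n auto

# or pin an explicit worker count
pytest -n 4

Pinning an explicit count like -n 4 is handy when auto-detection launches more workers than your machine (or CI runner) can comfortably handle.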

pytest-order (Custom Test Order)

Once you start using pytest-xdist, you want to avoid a situation where Pytest only starts executing the expensive tests near the very end, after it has finished most of the cheap tests. If the expensive tests run last, one or two cores stay busy while the rest sit idle with nothing left to run. This is especially likely to be a problem if you have a small number of very expensive tests and a large number of very cheap ones.

Ideally, the really expensive tests are run first. The cheaper tests can then run whenever there's a free core. Essentially, the cheap tests can act as backfill, whenever a core has no more expensive tests to run.

Add pytest-order to your dependencies. You can then add ordering decorators to your expensive test functions, to force Pytest to execute them first:

import pytest

@pytest.mark.order(1)
def test_summary_stat_1() -> None:
    ...

@pytest.mark.order(2)
def test_summary_stat_2() -> None:
    ...

Scoped Test Fixtures

If you're not mutating your fixtures, and they are even remotely expensive to compute, then adding module-level, package-level, or even session-level scopes to them cuts down on duplicate work.

If you have a fixture like:

import numpy as np
import numpy.typing as npt
import pytest

@pytest.fixture
def reference_data() -> npt.NDArray[np.float64]:
    ...

def test_summary_stat_1(reference_data: npt.NDArray[np.float64]) -> None:
    ...

def test_summary_stat_2(reference_data: npt.NDArray[np.float64]) -> None:
    ...

Then by default, the fixture will be created and torn down twice: once for test_summary_stat_1() and once for test_summary_stat_2(). As long as we aren't mutating the return value of reference_data() anywhere, this is wasted work. To fix this, we can add a scope to it:

"module"-scoped

@pytest.fixture(scope="module")
def reference_data() -> npt.NDArray[np.float64]:
    ...

This will cause the fixture to only be created and torn down once for this module.

"package"-scoped

Choose the package where you want the fixture to live, create a conftest.py file at the root of that package, and put the fixture inside it:

@pytest.fixture(scope="package")
def reference_data() -> npt.NDArray[np.float64]:
    ...

Pytest automatically imports the conftest.py module before collecting the rest of the package.

This means the fixture is available to every test within that package, and it will only be created and torn down once for the entire package.
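For example, with a (purely hypothetical) layout like this, the fixture in tests/integration/conftest.py is visible to every test module under tests/integration, but not to tests/unit:

tests/
    integration/
        conftest.py        <- "package"-scoped reference_data lives here
        test_summary.py    <- can use reference_data
        test_plots.py      <- can use reference_data
    unit/
        test_parsing.py    <- cannot see reference_data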

"session"-scoped

Create a conftest.py file at the root of your repo, and put the fixture inside that file:

@pytest.fixture(scope="session")
def reference_data() -> npt.NDArray[np.float64]:
    ...

This means the fixture is available to any test in the repo, and it will only be created and torn down once per pytest invocation.
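For example, a test module anywhere in the repo (the path below is purely illustrative) can request the fixture by name, without importing it:

# tests/unit/test_other.py
import numpy as np
import numpy.typing as npt

def test_uses_reference_data(reference_data: npt.NDArray[np.float64]) -> None:
    ...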

Conditional Test Execution

Sometimes you'll have unavoidably slow test functions. In that case, you can configure pytest to only run them when you pass the --run-slow flag. Add the following to the conftest.py at the repo root:

from typing import Any

import pytest


def pytest_addoption(parser: Any) -> None:
    parser.addoption(
        "--run-slow",
        action="store_true",
        default=False,
        help="run slow tests"
    )


def pytest_configure(config: Any) -> None:
    config.addinivalue_line(
        "markers",
        "slow: mark test as slow to run, meaning it will be skipped by default"
    )


def pytest_collection_modifyitems(config: Any, items: Any) -> None:
    if not config.getoption("--run-slow"):
        # skip @pytest.mark.slow tests by default:
        skip_slow = pytest.mark.skip(
            reason="need --run-slow command-line option to run"
        )

        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

You can then add @pytest.mark.slow as a decorator to your slow test functions:

@pytest.mark.slow
def test_summary_stat() -> None:
    ...

This test will then only run when you invoke pytest --run-slow.
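If you also want a way to run only the slow tests (in a dedicated CI job, for example), you can combine the flag with pytest's built-in marker selection:

pytest --run-slow -m slow

Here -m slow deselects everything that isn't marked slow, while --run-slow stops the slow tests themselves from being skipped.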

You could also invert the logic and make slow tests opt-out instead, with something like pytest --skip-slow.
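A minimal sketch of that inverted setup (reusing the imports and the slow marker registration from above; the --skip-slow flag name is just an example):

def pytest_addoption(parser: Any) -> None:
    parser.addoption(
        "--skip-slow",
        action="store_true",
        default=False,
        help="skip slow tests"
    )


def pytest_collection_modifyitems(config: Any, items: Any) -> None:
    if config.getoption("--skip-slow"):
        # only skip @pytest.mark.slow tests when explicitly asked to:
        skip_slow = pytest.mark.skip(reason="--skip-slow was passed")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

With this version, the slow tests run by default, and are only skipped when you opt out.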