Killer-Skills

python-testing

Verified
v1.0.0
GitHub

About this Skill

Essential for Python Development Agents requiring comprehensive test automation and quality assurance capabilities. python-testing is a set of testing strategies for Python applications using pytest, TDD methodology, fixtures, mocking, parametrization, and coverage requirements.

Features

Implements the Test-Driven Development (TDD) cycle with RED, GREEN, and REFACTOR stages
Utilizes pytest for efficient testing
Supports fixtures, mocking, and parametrization for robust test suites
Enforces coverage requirements for thorough testing
Facilitates setting up testing infrastructure for Python projects

affaan-m
Updated: 3/6/2026
Installation
> npx killer-skills add affaan-m/everything-claude-code/python-testing

Agent Capability Analysis

The python-testing MCP Server by affaan-m is an open-source integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Essential for Python Development Agents requiring comprehensive test automation and quality assurance capabilities.

Core Value

Enables agents to implement pytest-driven testing strategies including fixture management, mocking, parametrization, and coverage analysis. Provides TDD methodology enforcement for Python code development cycles.

Capabilities Granted by the python-testing MCP Server

Implementing TDD workflows (red-green-refactor)
Generating parametrized test cases for edge coverage
Creating mock objects for isolated unit testing
Analyzing test coverage metrics for quality assurance
Setting up pytest fixtures for test infrastructure

Prerequisites & Limits

  • Python-specific testing framework
  • Requires pytest installation
  • Focuses on unit/integration testing patterns
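The pytest requirement above can typically be satisfied with a standard pip install; the exact plugin set here (pytest-cov for coverage, pytest-asyncio for async tests) is an assumption based on the features this skill covers:

```shell
# Install pytest plus the plugins this skill's patterns rely on
# (assumed set: pytest-cov for coverage, pytest-asyncio for async tests)
pip install pytest pytest-cov pytest-asyncio
```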
SKILL.md

Python Testing Patterns

Comprehensive testing strategies for Python applications using pytest, TDD methodology, and best practices.

When to Activate

  • Writing new Python code (follow TDD: red, green, refactor)
  • Designing test suites for Python projects
  • Reviewing Python test coverage
  • Setting up testing infrastructure

Core Testing Philosophy

Test-Driven Development (TDD)

Always follow the TDD cycle:

  1. RED: Write a failing test for the desired behavior
  2. GREEN: Write minimal code to make the test pass
  3. REFACTOR: Improve code while keeping tests green
```python
# Step 1: Write failing test (RED)
def test_add_numbers():
    result = add(2, 3)
    assert result == 5

# Step 2: Write minimal implementation (GREEN)
def add(a, b):
    return a + b

# Step 3: Refactor if needed (REFACTOR)
```

Coverage Requirements

  • Target: 80%+ code coverage
  • Critical paths: 100% coverage required
  • Use pytest --cov to measure coverage
```bash
pytest --cov=mypackage --cov-report=term-missing --cov-report=html
```

pytest Fundamentals

Basic Test Structure

```python
import pytest

def test_addition():
    """Test basic addition."""
    assert 2 + 2 == 4

def test_string_uppercase():
    """Test string uppercasing."""
    text = "hello"
    assert text.upper() == "HELLO"

def test_list_append():
    """Test list append."""
    items = [1, 2, 3]
    items.append(4)
    assert 4 in items
    assert len(items) == 4
```

Assertions

```python
# Equality
assert result == expected

# Inequality
assert result != unexpected

# Truthiness
assert result              # Truthy
assert not result          # Falsy
assert result is True      # Exactly True
assert result is False     # Exactly False
assert result is None      # Exactly None

# Membership
assert item in collection
assert item not in collection

# Comparisons
assert result > 0
assert 0 <= result <= 100

# Type checking
assert isinstance(result, str)

# Exception testing (preferred approach)
with pytest.raises(ValueError):
    raise ValueError("error message")

# Check exception message
with pytest.raises(ValueError, match="invalid input"):
    raise ValueError("invalid input provided")

# Check exception attributes
with pytest.raises(ValueError) as exc_info:
    raise ValueError("error message")
assert str(exc_info.value) == "error message"
```

Fixtures

Basic Fixture Usage

```python
import pytest

@pytest.fixture
def sample_data():
    """Fixture providing sample data."""
    return {"name": "Alice", "age": 30}

def test_sample_data(sample_data):
    """Test using the fixture."""
    assert sample_data["name"] == "Alice"
    assert sample_data["age"] == 30
```

Fixture with Setup/Teardown

```python
@pytest.fixture
def database():
    """Fixture with setup and teardown."""
    # Setup
    db = Database(":memory:")
    db.create_tables()
    db.insert_test_data()

    yield db  # Provide to test

    # Teardown
    db.close()

def test_database_query(database):
    """Test database operations."""
    result = database.query("SELECT * FROM users")
    assert len(result) > 0
```

Fixture Scopes

```python
import os

import pytest

# Function scope (default) - runs for each test
@pytest.fixture
def temp_file():
    with open("temp.txt", "w") as f:
        yield f
    os.remove("temp.txt")

# Module scope - runs once per module
@pytest.fixture(scope="module")
def module_db():
    db = Database(":memory:")
    db.create_tables()
    yield db
    db.close()

# Session scope - runs once per test session
@pytest.fixture(scope="session")
def shared_resource():
    resource = ExpensiveResource()
    yield resource
    resource.cleanup()
```

Fixture with Parameters

```python
@pytest.fixture(params=[1, 2, 3])
def number(request):
    """Parameterized fixture."""
    return request.param

def test_numbers(number):
    """Test runs 3 times, once for each parameter."""
    assert number > 0
```

Using Multiple Fixtures

```python
@pytest.fixture
def user():
    return User(id=1, name="Alice")

@pytest.fixture
def admin():
    return User(id=2, name="Admin", role="admin")

def test_user_admin_interaction(user, admin):
    """Test using multiple fixtures."""
    assert admin.can_manage(user)
```

Autouse Fixtures

```python
@pytest.fixture(autouse=True)
def reset_config():
    """Automatically runs before every test."""
    Config.reset()
    yield
    Config.cleanup()

def test_without_fixture_call():
    # reset_config runs automatically
    assert Config.get_setting("debug") is False
```

Conftest.py for Shared Fixtures

```python
# tests/conftest.py
import pytest

@pytest.fixture
def client():
    """Shared fixture for all tests."""
    app = create_app(testing=True)
    with app.test_client() as client:
        yield client

@pytest.fixture
def auth_headers(client):
    """Generate auth headers for API testing."""
    response = client.post("/api/login", json={
        "username": "test",
        "password": "test"
    })
    token = response.json["token"]
    return {"Authorization": f"Bearer {token}"}
```

Parametrization

Basic Parametrization

```python
@pytest.mark.parametrize("input,expected", [
    ("hello", "HELLO"),
    ("world", "WORLD"),
    ("PyThOn", "PYTHON"),
])
def test_uppercase(input, expected):
    """Test runs 3 times with different inputs."""
    assert input.upper() == expected
```

Multiple Parameters

```python
@pytest.mark.parametrize("a,b,expected", [
    (2, 3, 5),
    (0, 0, 0),
    (-1, 1, 0),
    (100, 200, 300),
])
def test_add(a, b, expected):
    """Test addition with multiple inputs."""
    assert add(a, b) == expected
```

Parametrize with IDs

```python
@pytest.mark.parametrize("input,expected", [
    ("valid@email.com", True),
    ("invalid", False),
    ("@no-domain.com", False),
], ids=["valid-email", "missing-at", "missing-domain"])
def test_email_validation(input, expected):
    """Test email validation with readable test IDs."""
    assert is_valid_email(input) is expected
```

Parametrized Fixtures

```python
@pytest.fixture(params=["sqlite", "postgresql", "mysql"])
def db(request):
    """Test against multiple database backends."""
    if request.param == "sqlite":
        return Database(":memory:")
    elif request.param == "postgresql":
        return Database("postgresql://localhost/test")
    elif request.param == "mysql":
        return Database("mysql://localhost/test")

def test_database_operations(db):
    """Test runs 3 times, once for each database."""
    result = db.query("SELECT 1")
    assert result is not None
```

Markers and Test Selection

Custom Markers

```python
# Mark slow tests
@pytest.mark.slow
def test_slow_operation():
    time.sleep(5)

# Mark integration tests
@pytest.mark.integration
def test_api_integration():
    response = requests.get("https://api.example.com")
    assert response.status_code == 200

# Mark unit tests
@pytest.mark.unit
def test_unit_logic():
    assert calculate(2, 3) == 5
```

Run Specific Tests

```bash
# Run only fast tests
pytest -m "not slow"

# Run only integration tests
pytest -m integration

# Run integration or slow tests
pytest -m "integration or slow"

# Run tests marked as unit but not slow
pytest -m "unit and not slow"
```

Configure Markers in pytest.ini

```ini
[pytest]
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
    django: marks tests as requiring Django
```

Mocking and Patching

Mocking Functions

```python
from unittest.mock import patch, Mock

@patch("mypackage.external_api_call")
def test_with_mock(api_call_mock):
    """Test with mocked external API."""
    api_call_mock.return_value = {"status": "success"}

    result = my_function()

    api_call_mock.assert_called_once()
    assert result["status"] == "success"
```

Mocking Return Values

```python
@patch("mypackage.Database.connect")
def test_database_connection(connect_mock):
    """Test with mocked database connection."""
    connect_mock.return_value = MockConnection()

    db = Database()
    db.connect("localhost")

    connect_mock.assert_called_once_with("localhost")
```

Mocking Exceptions

```python
@patch("mypackage.api_call")
def test_api_error_handling(api_call_mock):
    """Test error handling with mocked exception."""
    api_call_mock.side_effect = ConnectionError("Network error")

    with pytest.raises(ConnectionError):
        api_call()

    api_call_mock.assert_called_once()
```

Mocking Context Managers

```python
from unittest.mock import mock_open, patch

@patch("builtins.open", new_callable=mock_open)
def test_file_reading(mock_file):
    """Test file reading with mocked open."""
    mock_file.return_value.read.return_value = "file content"

    result = read_file("test.txt")

    mock_file.assert_called_once_with("test.txt", "r")
    assert result == "file content"
```

Using Autospec

```python
@patch("mypackage.DBConnection", autospec=True)
def test_autospec(db_mock):
    """Test with autospec to catch API misuse."""
    db = db_mock.return_value
    db.query("SELECT * FROM users")

    # Autospec would raise if DBConnection had no query method
    db.query.assert_called_once_with("SELECT * FROM users")
```

Mock Class Instances

```python
class TestUserService:
    @patch("mypackage.UserRepository")
    def test_create_user(self, repo_mock):
        """Test user creation with mocked repository."""
        repo_mock.return_value.save.return_value = User(id=1, name="Alice")

        service = UserService(repo_mock.return_value)
        user = service.create_user(name="Alice")

        assert user.name == "Alice"
        repo_mock.return_value.save.assert_called_once()
```

Mock Property

```python
from unittest.mock import Mock, PropertyMock

@pytest.fixture
def mock_config():
    """Create a mock with a property."""
    config = Mock()
    type(config).debug = PropertyMock(return_value=True)
    type(config).api_key = PropertyMock(return_value="test-key")
    return config

def test_with_mock_config(mock_config):
    """Test with mocked config properties."""
    assert mock_config.debug is True
    assert mock_config.api_key == "test-key"
```

Testing Async Code

Async Tests with pytest-asyncio

```python
import pytest

@pytest.mark.asyncio
async def test_async_function():
    """Test async function."""
    result = await async_add(2, 3)
    assert result == 5

@pytest.mark.asyncio
async def test_async_with_fixture(async_client):
    """Test async with async fixture."""
    response = await async_client.get("/api/users")
    assert response.status_code == 200
```

Async Fixture

```python
@pytest.fixture
async def async_client():
    """Async fixture providing an async test client."""
    app = create_app()
    async with app.test_client() as client:
        yield client

@pytest.mark.asyncio
async def test_api_endpoint(async_client):
    """Test using async fixture."""
    response = await async_client.get("/api/data")
    assert response.status_code == 200
```

Mocking Async Functions

```python
@pytest.mark.asyncio
@patch("mypackage.async_api_call")
async def test_async_mock(api_call_mock):
    """Test async function with mock."""
    api_call_mock.return_value = {"status": "ok"}

    result = await my_async_function()

    api_call_mock.assert_awaited_once()
    assert result["status"] == "ok"
```

Testing Exceptions

Testing Expected Exceptions

```python
def test_divide_by_zero():
    """Test that dividing by zero raises ZeroDivisionError."""
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_custom_exception():
    """Test custom exception with message."""
    with pytest.raises(ValueError, match="invalid input"):
        validate_input("invalid")
```

Testing Exception Attributes

```python
def test_exception_with_details():
    """Test exception with custom attributes."""
    with pytest.raises(CustomError) as exc_info:
        raise CustomError("error", code=400)

    assert exc_info.value.code == 400
    assert "error" in str(exc_info.value)
```

Testing Side Effects

Testing File Operations

```python
import tempfile
import os

def test_file_processing():
    """Test file processing with a temp file."""
    with tempfile.NamedTemporaryFile(mode='w', delete=False, suffix='.txt') as f:
        f.write("test content")
        temp_path = f.name

    try:
        result = process_file(temp_path)
        assert result == "processed: test content"
    finally:
        os.unlink(temp_path)
```

Testing with pytest's tmp_path Fixture

```python
def test_with_tmp_path(tmp_path):
    """Test using pytest's built-in temp path fixture."""
    test_file = tmp_path / "test.txt"
    test_file.write_text("hello world")

    result = process_file(str(test_file))
    assert result == "hello world"
    # tmp_path is automatically cleaned up
```

Testing with tmpdir Fixture

```python
def test_with_tmpdir(tmpdir):
    """Test using pytest's tmpdir fixture."""
    test_file = tmpdir.join("test.txt")
    test_file.write("data")

    result = process_file(str(test_file))
    assert result == "data"
```

Test Organization

Directory Structure

tests/
├── conftest.py                 # Shared fixtures
├── __init__.py
├── unit/                       # Unit tests
│   ├── __init__.py
│   ├── test_models.py
│   ├── test_utils.py
│   └── test_services.py
├── integration/                # Integration tests
│   ├── __init__.py
│   ├── test_api.py
│   └── test_database.py
└── e2e/                        # End-to-end tests
    ├── __init__.py
    └── test_user_flow.py

Test Classes

```python
class TestUserService:
    """Group related tests in a class."""

    @pytest.fixture(autouse=True)
    def setup(self):
        """Setup runs before each test in this class."""
        self.service = UserService()

    def test_create_user(self):
        """Test user creation."""
        user = self.service.create_user("Alice")
        assert user.name == "Alice"

    def test_delete_user(self):
        """Test user deletion."""
        user = User(id=1, name="Bob")
        self.service.delete_user(user)
        assert not self.service.user_exists(1)
```

Best Practices

DO

  • Follow TDD: Write tests before code (red-green-refactor)
  • Test one thing: Each test should verify a single behavior
  • Use descriptive names: test_user_login_with_invalid_credentials_fails
  • Use fixtures: Eliminate duplication with fixtures
  • Mock external dependencies: Don't depend on external services
  • Test edge cases: Empty inputs, None values, boundary conditions
  • Aim for 80%+ coverage: Focus on critical paths
  • Keep tests fast: Use marks to separate slow tests

DON'T

  • Don't test implementation: Test behavior, not internals
  • Don't use complex conditionals in tests: Keep tests simple
  • Don't ignore test failures: All tests must pass
  • Don't test third-party code: Trust libraries to work
  • Don't share state between tests: Tests should be independent
  • Don't catch exceptions in tests: Use pytest.raises
  • Don't use print statements: Use assertions and pytest output
  • Don't write tests that are too brittle: Avoid over-specific mocks
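To make the first and last DON'T items concrete, here is a minimal sketch contrasting a brittle, implementation-coupled test with a behavior-focused one (the Stack class and its _items attribute are hypothetical, used only for illustration):

```python
# Hypothetical Stack class, used only to illustrate the point.
class Stack:
    def __init__(self):
        self._items = []  # internal storage detail

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# DON'T: this test inspects internals and breaks if storage changes.
def test_stack_internals():
    s = Stack()
    s.push(1)
    assert s._items == [1]

# DO: this test asserts only observable behavior (LIFO order).
def test_stack_behavior():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1
```

If Stack later switches to a linked list, only the brittle test fails, even though the class still works correctly.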

Common Patterns

Testing API Endpoints (FastAPI/Flask)

```python
@pytest.fixture
def client():
    app = create_app(testing=True)
    return app.test_client()

def test_get_user(client):
    response = client.get("/api/users/1")
    assert response.status_code == 200
    assert response.json["id"] == 1

def test_create_user(client):
    response = client.post("/api/users", json={
        "name": "Alice",
        "email": "alice@example.com"
    })
    assert response.status_code == 201
    assert response.json["name"] == "Alice"
```

Testing Database Operations

```python
@pytest.fixture
def db_session():
    """Create a test database session."""
    session = Session(bind=engine)
    session.begin_nested()
    yield session
    session.rollback()
    session.close()

def test_create_user(db_session):
    user = User(name="Alice", email="alice@example.com")
    db_session.add(user)
    db_session.commit()

    retrieved = db_session.query(User).filter_by(name="Alice").first()
    assert retrieved.email == "alice@example.com"
```

Testing Class Methods

```python
class TestCalculator:
    @pytest.fixture
    def calculator(self):
        return Calculator()

    def test_add(self, calculator):
        assert calculator.add(2, 3) == 5

    def test_divide_by_zero(self, calculator):
        with pytest.raises(ZeroDivisionError):
            calculator.divide(10, 0)
```

pytest Configuration

pytest.ini

```ini
[pytest]
testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
addopts =
    --strict-markers
    --disable-warnings
    --cov=mypackage
    --cov-report=term-missing
    --cov-report=html
markers =
    slow: marks tests as slow
    integration: marks tests as integration tests
    unit: marks tests as unit tests
```

pyproject.toml

```toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = [
    "--strict-markers",
    "--cov=mypackage",
    "--cov-report=term-missing",
    "--cov-report=html",
]
markers = [
    "slow: marks tests as slow",
    "integration: marks tests as integration tests",
    "unit: marks tests as unit tests",
]
```

Running Tests

```bash
# Run all tests
pytest

# Run a specific file
pytest tests/test_utils.py

# Run a specific test
pytest tests/test_utils.py::test_function

# Run with verbose output
pytest -v

# Run with coverage
pytest --cov=mypackage --cov-report=html

# Run only fast tests
pytest -m "not slow"

# Run until first failure
pytest -x

# Run and stop on N failures
pytest --maxfail=3

# Run last failed tests
pytest --lf

# Run tests matching a pattern
pytest -k "test_user"

# Run with debugger on failure
pytest --pdb
```

Quick Reference

| Pattern | Usage |
| --- | --- |
| `pytest.raises()` | Test expected exceptions |
| `@pytest.fixture()` | Create reusable test fixtures |
| `@pytest.mark.parametrize()` | Run tests with multiple inputs |
| `@pytest.mark.slow` | Mark slow tests |
| `pytest -m "not slow"` | Skip slow tests |
| `@patch()` | Mock functions and classes |
| `tmp_path` fixture | Automatic temp directory |
| `pytest --cov` | Generate coverage report |
| `assert` | Simple and readable assertions |

Remember: Tests are code too. Keep them clean, readable, and maintainable. Good tests catch bugs; great tests prevent them.

Related Skills

Looking for an alternative to python-testing or building an official AI Agent? Explore these related open-source MCP Servers.


flags (facebook): a feature flag management system that enables developers to check flag states, compare channels, and debug feature behavior differences across release channels.

extract-errors (facebook): a skill that assists in extracting and managing error codes in React applications using the yarn extract-errors command.

fix (facebook): a technical skill that resolves lint errors, formatting issues, and ensures code quality in declarative, frontend, and UI projects.

flow (facebook): a type checking system for JavaScript, used to validate React code and ensure consistency across applications.