testing-patterns — community skill for IterableData

v1.0.0

About this Skill

Perfect for Python Testing Agents needing structured test organization and execution capabilities. Testing conventions, patterns, and best practices for IterableData. Use when writing tests, debugging test failures, or establishing test coverage.

datenoio
Updated: 3/12/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Review criteria: original recommendation layer · concrete use-case guidance · explicit limitations and caution · locale and body language aligned

Review Score: 7/11
Quality Score: 33
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to efficiently run tests using pytest, supporting various test structures and file formats like CSV and Parquet, with precise control over test execution and validation.

Ideal Agent Persona

Perfect for Python Testing Agents needing structured test organization and execution capabilities.

Capabilities Granted for testing-patterns

Automating test suites for data-intensive applications
Generating test reports for CSV and Parquet files
Debugging test failures using verbose pytest output

! Prerequisites & Limits

  • Requires pytest installation
  • Python environment needed
  • Limited to testing patterns for CSV and Parquet files
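
The prerequisites above can be satisfied with a standard pip setup. A sketch: only pytest is a stated requirement here; pytest-cov and pytest-xdist are optional plugins assumed by the coverage and parallel-execution commands later on this page.

```shell
# Required by this skill
pip install pytest

# Optional plugins used by the coverage and parallel-execution examples
pip install pytest-cov pytest-xdist
```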

Why this page is reference-only

  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After the Review

Decide the next action before you keep reading repository material

Killer-Skills should do more than surface repository instructions: it should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps


? Frequently Asked Questions

What is testing-patterns?

Perfect for Python Testing Agents needing structured test organization and execution capabilities. Testing conventions, patterns, and best practices for IterableData. Use when writing tests, debugging test failures, or establishing test coverage.

How do I install testing-patterns?

Run the command: npx killer-skills add datenoio/iterabledata/testing-patterns. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for testing-patterns?

Key use cases include: Automating test suites for data-intensive applications, Generating test reports for CSV and Parquet files, Debugging test failures using verbose pytest output.

Which IDEs are compatible with testing-patterns?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for testing-patterns?

Requires pytest installation. Python environment needed. Limited to testing patterns for CSV and Parquet files.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add datenoio/iterabledata/testing-patterns. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use testing-patterns immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

testing-patterns

Install testing-patterns, an AI agent skill for agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

Testing Patterns

Test Structure

File Naming

  • Test files: test_*.py in tests/ directory
  • One test file per format/feature: test_csv.py, test_parquet.py
  • Test classes: Test* (e.g., TestCSV, TestParquet)
  • Test functions: test_* (e.g., test_read, test_write)

Running Tests

bash
# All tests
pytest --verbose

# Specific test file
pytest tests/test_csv.py -v

# Specific test function
pytest tests/test_csv.py::TestCSV::test_read -v

# Parallel execution (requires the pytest-xdist plugin)
pytest -n auto

# With coverage (requires the pytest-cov plugin)
pytest --cov=iterable --cov-report=html

Test Patterns

Basic Read Test

python
def test_read(self):
    with open_iterable('testdata/test.csv') as source:
        rows = list(source)
        assert len(rows) > 0
        assert isinstance(rows[0], dict)

Basic Write Test

python
def test_write(self, tmp_path):
    output = tmp_path / 'output.csv'
    data = [{'col1': 'val1', 'col2': 'val2'}]

    with open_iterable(output, 'w') as dest:
        dest.write_bulk(data)

    # Verify written data
    with open_iterable(output) as source:
        rows = list(source)
        assert rows == data

Compression Tests

python
def test_gzip_compression(self):
    with open_iterable('testdata/test.csv.gz') as source:
        rows = list(source)
        assert len(rows) > 0

Bulk Operations Test

python
def test_read_bulk(self):
    with open_iterable('testdata/test.csv') as source:
        chunks = list(source.read_bulk(size=100))
        assert len(chunks) > 0
        assert all(isinstance(chunk, list) for chunk in chunks)

Edge Cases

python
import pytest

def test_empty_file(self):
    with open_iterable('testdata/empty.csv') as source:
        rows = list(source)
        assert rows == []

def test_malformed_data(self):
    with pytest.raises(ValueError):
        with open_iterable('testdata/malformed.csv') as source:
            list(source)

Missing Dependencies

python
import pytest

@pytest.mark.skipif(
    not HAS_OPTIONAL_DEPENDENCY,
    reason="Optional dependency not installed"
)
def test_optional_format(self):
    # Test a format that requires an optional dependency
    pass

Test Data

  • Store test files in testdata/ directory
  • Use descriptive names: test_simple.csv, test_nested.json
  • Include edge cases: empty files, malformed data
  • Test with various encodings for text formats
  • Test with compression: .gz, .bz2, .zst, etc.
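
The compression bullet can be exercised with a self-contained round-trip helper. The sketch below uses only the standard library (gzip + csv) rather than the library's open_iterable, so treat it as an illustration of the pattern, not upstream code:

```python
import csv
import gzip

def write_gzip_csv(path, rows):
    # Write a list of dict rows as a gzip-compressed CSV file.
    with gzip.open(path, 'wt', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0]))
        writer.writeheader()
        writer.writerows(rows)

def read_gzip_csv(path):
    # Read a gzip-compressed CSV file back into a list of dict rows.
    with gzip.open(path, 'rt', newline='') as f:
        return list(csv.DictReader(f))
```

A parametrized test can then repeat the same round trip over `.gz`, `.bz2`, and other suffixes.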

Test Coverage

Required Coverage

  • All public methods
  • Error handling paths
  • Edge cases (empty files, malformed data)
  • Compression variants
  • Encoding variants (for text formats)
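
These expectations can be enforced mechanically: pytest-cov's --cov-fail-under flag fails the run when total coverage drops below a threshold (90 here is an illustrative number, not an upstream requirement):

```shell
# Fail the test run if overall coverage falls below the chosen threshold
pytest --cov=iterable --cov-fail-under=90
```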

Coverage Report

bash
pytest --cov=iterable --cov-report=html
# View the report at htmlcov/index.html

Test Organization

Class-Based Tests

python
class TestCSV:
    def test_read(self):
        pass

    def test_write(self):
        pass

    def test_read_bulk(self):
        pass

Fixtures

Use pytest fixtures for common setup:

python
import pytest

@pytest.fixture
def sample_data():
    return [{'col1': 'val1', 'col2': 'val2'}]

def test_write_with_fixture(sample_data, tmp_path):
    output = tmp_path / 'output.csv'
    with open_iterable(output, 'w') as dest:
        dest.write_bulk(sample_data)

Python Version Support

Tests should pass for:

  • Python 3.10
  • Python 3.11
  • Python 3.12

Use pytest markers if version-specific behavior is needed:

python
import sys

import pytest

@pytest.mark.skipif(
    sys.version_info < (3, 11),
    reason="Requires Python 3.11+"
)
def test_version_specific_behavior():  # placeholder name for the decorated test
    ...

Best Practices

  1. Always use context managers: with open_iterable(...) as source:
  2. Use temporary directories: tmp_path fixture for write tests
  3. Test both read and write: Verify round-trip when possible
  4. Test compression: Include compressed variants
  5. Test edge cases: Empty files, single row, large files
  6. Skip optional dependencies: Use @pytest.mark.skipif appropriately
  7. Clear assertions: Use descriptive assert messages
  8. Isolated tests: Each test should be independent
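
Point 7 in practice: an assertion message that carries the observed value turns a bare AssertionError into an actionable failure. A minimal sketch (helper names are illustrative, not part of the library):

```python
def assert_non_empty(rows):
    # Fail with a message that reports what was actually observed.
    assert len(rows) > 0, f"expected at least one row, got {len(rows)}"

def assert_columns(row, expected):
    # Compare column sets and name the difference in the failure message.
    missing = set(expected) - set(row)
    assert not missing, f"missing columns: {sorted(missing)}"
```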

Common Test Patterns

Round-Trip Test

python
def test_round_trip(self, tmp_path):
    original = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]
    output = tmp_path / 'output.csv'

    # Write
    with open_iterable(output, 'w') as dest:
        dest.write_bulk(original)

    # Read back
    with open_iterable(output) as source:
        result = list(source)

    assert result == original

Streaming Test

python
def test_streaming(self):
    with open_iterable('large_file.csv') as source:
        count = 0
        for row in source:
            count += 1
            if count >= 100:
                break
        assert count == 100
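
A stdlib alternative to the manual counter above (my suggestion, not upstream code): itertools.islice caps how many rows are consumed without an explicit break:

```python
from itertools import islice

def take(source, n):
    # Consume at most n items from any iterable.
    return list(islice(source, n))
```

Inside the test, `assert len(take(source, 100)) == 100` replaces the counting loop.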

Debugging Tests

Verbose Output

bash
pytest -vv  # Very verbose
pytest -s   # Show print statements

Run Last Failed

bash
pytest --lf  # Last failed
pytest --ff  # Failed first

Debug Specific Test

bash
pytest tests/test_csv.py::TestCSV::test_read -v -s
