testing-patterns

Killer-Skills community skill · v1.0.0
About this Skill

A Python library to read, write, and convert data files in formats including BSON, JSON, NDJSON, Parquet, ORC, XLS, XLSX, and XML.

datenoio
Updated: 2/27/2026

Quality Score

Excellent (Top 5%), based on code quality & docs
Installation
Universal install (auto-detects Cursor, Windsurf, and VS Code):

> npx killer-skills add datenoio/iterabledata

Agent Capability Analysis

The testing-patterns skill by datenoio is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for Python Testing Agents needing standardized test structures and data format compatibility.

Core Value

Empowers agents to efficiently run tests on various data formats such as JSON, Parquet, and XML using pytest, ensuring comprehensive test coverage and validation.

Capabilities Granted

Automating tests for data-intensive applications
Generating test suites for multiple file formats
Debugging data processing pipelines with NDJSON and BSON support

Prerequisites & Limits

  • Requires pytest installation
  • Python environment only
  • Limited to the file formats listed above
Project files

  • SKILL.md (5.1 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Testing Patterns

Test Structure

File Naming

  • Test files: test_*.py in tests/ directory
  • One test file per format/feature: test_csv.py, test_parquet.py
  • Test classes: Test* (e.g., TestCSV, TestParquet)
  • Test functions: test_* (e.g., test_read, test_write)

Running Tests

```bash
# All tests
pytest --verbose

# Specific test file
pytest tests/test_csv.py -v

# Specific test function
pytest tests/test_csv.py::TestCSV::test_read -v

# Parallel execution (requires pytest-xdist)
pytest -n auto

# With coverage (requires pytest-cov)
pytest --cov=iterable --cov-report=html
```

Test Patterns

Basic Read Test

```python
def test_read(self):
    with open_iterable('testdata/test.csv') as source:
        rows = list(source)
        assert len(rows) > 0
        assert isinstance(rows[0], dict)
```

Basic Write Test

```python
def test_write(self, tmp_path):
    output = tmp_path / 'output.csv'
    data = [{'col1': 'val1', 'col2': 'val2'}]

    with open_iterable(output, 'w') as dest:
        dest.write_bulk(data)

    # Verify written data
    with open_iterable(output) as source:
        rows = list(source)
        assert rows == data
```

Compression Tests

```python
def test_gzip_compression(self):
    with open_iterable('testdata/test.csv.gz') as source:
        rows = list(source)
        assert len(rows) > 0
```

Bulk Operations Test

```python
def test_read_bulk(self):
    with open_iterable('testdata/test.csv') as source:
        chunks = list(source.read_bulk(size=100))
        assert len(chunks) > 0
        assert all(isinstance(chunk, list) for chunk in chunks)
```

Edge Cases

```python
def test_empty_file(self):
    with open_iterable('testdata/empty.csv') as source:
        rows = list(source)
        assert rows == []

def test_malformed_data(self):
    with pytest.raises(ValueError):
        with open_iterable('testdata/malformed.csv') as source:
            list(source)
```

Missing Dependencies

```python
@pytest.mark.skipif(
    not HAS_OPTIONAL_DEPENDENCY,
    reason="Optional dependency not installed"
)
def test_optional_format(self):
    # Test format that requires optional dependency
    pass
```
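The `HAS_OPTIONAL_DEPENDENCY` flag in the snippet above is not something pytest provides; it has to be computed at module import time. A common sketch, using `pyarrow` purely as a stand-in for whichever optional package the format needs:

```python
import importlib.util

# Probe for the optional package without importing it (find_spec returns
# None when the module is not installed). 'pyarrow' is illustrative only;
# substitute the real optional dependency for the format under test.
HAS_OPTIONAL_DEPENDENCY = importlib.util.find_spec("pyarrow") is not None
```

Because the probe never imports the package, it stays cheap and cannot fail with an ImportError at collection time.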

Test Data

  • Store test files in testdata/ directory
  • Use descriptive names: test_simple.csv, test_nested.json
  • Include edge cases: empty files, malformed data
  • Test with various encodings for text formats
  • Test with compression: .gz, .bz2, .zst, and other supported codecs
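Compressed fixtures don't have to be committed as binary files; they can be generated from plain rows with the standard library. A minimal sketch (the helper name `make_gzip_csv` is ours, not part of this skill):

```python
import csv
import gzip
import io

def make_gzip_csv(path, rows):
    """Write a list of dicts as a gzip-compressed CSV test fixture."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    # gzip.open supports text mode; newline='' leaves csv's line endings intact
    with gzip.open(path, 'wt', encoding='utf-8', newline='') as f:
        f.write(buf.getvalue())
```

A conftest.py fixture can call this with `tmp_path` so each test gets a fresh compressed file.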

Test Coverage

Required Coverage

  • All public methods
  • Error handling paths
  • Edge cases (empty files, malformed data)
  • Compression variants
  • Encoding variants (for text formats)

Coverage Report

```bash
pytest --cov=iterable --cov-report=html
# Report is written to htmlcov/index.html
```

Test Organization

Class-Based Tests

```python
class TestCSV:
    def test_read(self):
        pass

    def test_write(self):
        pass

    def test_read_bulk(self):
        pass
```

Fixtures

Use pytest fixtures for common setup:

```python
@pytest.fixture
def sample_data():
    return [{'col1': 'val1', 'col2': 'val2'}]

def test_write_with_fixture(sample_data, tmp_path):
    output = tmp_path / 'output.csv'
    with open_iterable(output, 'w') as dest:
        dest.write_bulk(sample_data)
```

Python Version Support

Tests should pass for:

  • Python 3.10
  • Python 3.11
  • Python 3.12

Use pytest markers if version-specific behavior needed:

```python
@pytest.mark.skipif(
    sys.version_info < (3, 11),
    reason="Requires Python 3.11+"
)
def test_py311_feature(self):  # hypothetical test name
    ...
```

Best Practices

  1. Always use context managers: with open_iterable(...) as source:
  2. Use temporary directories: tmp_path fixture for write tests
  3. Test both read and write: Verify round-trip when possible
  4. Test compression: Include compressed variants
  5. Test edge cases: Empty files, single row, large files
  6. Skip optional dependencies: Use @pytest.mark.skipif appropriately
  7. Clear assertions: Use descriptive assert messages
  8. Isolated tests: Each test should be independent
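Practice 7 in code: a small comparison helper (our own name, not part of the library) whose failure messages point at the first differing row instead of dumping both lists:

```python
def assert_rows_equal(actual, expected):
    """Assert two row lists match, with messages that pinpoint the failure."""
    assert len(actual) == len(expected), (
        f"row count mismatch: got {len(actual)}, expected {len(expected)}"
    )
    for i, (got, want) in enumerate(zip(actual, expected)):
        assert got == want, f"row {i} differs: {got!r} != {want!r}"
```

With plain `assert result == original`, pytest's diff of two long row lists can be hard to scan; the per-row message makes large round-trip failures readable at a glance.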

Common Test Patterns

Round-Trip Test

```python
def test_round_trip(self, tmp_path):
    original = [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]
    output = tmp_path / 'output.csv'

    # Write
    with open_iterable(output, 'w') as dest:
        dest.write_bulk(original)

    # Read back
    with open_iterable(output) as source:
        result = list(source)

    assert result == original
```

Streaming Test

```python
def test_streaming(self):
    with open_iterable('large_file.csv') as source:
        count = 0
        for row in source:
            count += 1
            if count >= 100:
                break
        assert count == 100
```

Debugging Tests

Verbose Output

```bash
pytest -vv  # Very verbose
pytest -s   # Show print statements
```

Run Last Failed

```bash
pytest --lf  # Last failed
pytest --ff  # Failed first
```

Debug Specific Test

```bash
pytest tests/test_csv.py::TestCSV::test_read -v -s
```
