run-all-checks: setup and usage guide

v1.0.0

About this Skill

run-all-checks is a skill that automates the execution of project checks (ruff, mypy, pytest, and the Sphinx docs build) to identify and fix errors.

Features

Executes ruff for code quality checks
Runs mypy for static type checking
Integrates pytest for unit testing
Generates and checks Sphinx documentation
Supports parallel execution with the --parallel option
Provides a script-based approach for ease of use

By SETI
Updated: 3/10/2026
Installation

Universal install (auto-detect):

```bash
npx killer-skills add SETI/rms-cloud-tasks/run-all-checks
```

Supports 18+ platforms, including Cursor, Windsurf, VS Code, Trae, Claude, and OpenClaw.

Agent Capability Analysis

The run-all-checks MCP Server by SETI is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for Code Review Agents needing comprehensive project validation and automated error detection.

Core Value

Empowers agents to streamline error detection and fixing by executing checks like ruff, mypy, pytest, and Sphinx docs, providing a robust project validation workflow using Python and shell scripts.

Capabilities Granted for run-all-checks MCP Server

Automating project validation with ruff and mypy
Debugging code issues with pytest
Generating and validating Sphinx documentation

Prerequisites & Limits

  • Requires Python environment with necessary libraries installed
  • Limited to projects using ruff, mypy, pytest, and Sphinx
Project files: SKILL.md (4.5 KB), .cursorrules (1.2 KB), package.json (240 B).

SKILL.md

Run All Checks

Execute all project checks (ruff, mypy, pytest, Sphinx docs) and fix any errors found.

Quick Start

Run all checks and fix errors:

  1. Execute the run-all-checks script (or run checks manually).
  2. Review output for errors and warnings.
  3. Fix any issues found.
  4. Re-run checks to verify fixes.
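
The run/review/fix/re-run loop above can be sketched in Python with subprocess. This is an illustrative sketch, not the skill's actual script: the commands below are runnable stand-ins, and in a real project you would substitute the ruff, mypy, pytest, and Sphinx invocations listed later in this document.

```python
import subprocess
import sys

def run_checks(commands: list[list[str]]) -> list[str]:
    """Run each command in order; return the commands that exited non-zero."""
    failures = []
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode != 0:
            failures.append(" ".join(cmd))
    return failures

# Stand-in commands so this sketch runs anywhere; replace with the
# real check invocations (e.g. [sys.executable, "-m", "ruff", "check", "src"]).
checks = [
    [sys.executable, "-c", "print('lint ok')"],
    [sys.executable, "-c", "raise SystemExit(1)"],  # a deliberately failing check
]
print(run_checks(checks))  # anything listed here still needs fixing; then re-run
```

An empty list from run_checks corresponds to the "all checks pass" state in step 4.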

Preferred Method: Script

From project root with venv activated:

```bash
./scripts/run-all-checks.sh
```

Options:

  • -p, --parallel — Run code checks and docs in parallel (default).
  • -s, --sequential — Run all checks one after another (easier to debug).
  • -c, --code — Only ruff, mypy, pytest.
  • -d, --docs — Only Sphinx documentation build.
  • -h, --help — Show usage.
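
The difference between --parallel and --sequential can be sketched as follows. This is an assumption about the script's behavior based on the option descriptions above, using runnable stand-in commands rather than the real check invocations:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run(cmd: list[str]) -> int:
    """Run one command and return its exit code."""
    return subprocess.run(cmd, capture_output=True).returncode

# Illustrative stand-ins for the two groups the script runs:
# the code checks (ruff, mypy, pytest) and the Sphinx docs build.
code_checks = [sys.executable, "-c", "print('code checks ok')"]
docs_build = [sys.executable, "-c", "print('docs build ok')"]

# --parallel: run both groups concurrently (the default).
# --sequential would simply call run() on each group in turn.
with ThreadPoolExecutor(max_workers=2) as pool:
    exit_codes = list(pool.map(run, [code_checks, docs_build]))

print("all checks passed" if all(c == 0 for c in exit_codes) else "failures detected")
```

Sequential mode is easier to debug precisely because output from the two groups is not interleaved.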

Check Commands (Manual)

All commands assume you are at the project root with the virtual environment activated (source venv/bin/activate, or venv\Scripts\activate on Windows).

Code (from project root)

```bash
# Lint (ruff)
python -m ruff check src tests examples
python -m ruff format --check src tests examples

# Type check (mypy)
python -m mypy src tests examples

# Tests (pytest)
python -m pytest tests -q
```

Documentation (from project root)

```bash
cd docs && make clean && make html SPHINXOPTS="-W"
```

The script and docs build use SPHINXOPTS="-W" so Sphinx treats warnings as errors; the docs check fails if any warnings are produced.

Execution Workflow

Copy this checklist and track progress:

```text
Check Progress:
- [ ] Ruff check (src, tests, examples)
- [ ] Ruff format --check
- [ ] Mypy (src, tests, examples)
- [ ] Pytest (tests)
- [ ] Docs build without warnings
- [ ] All errors fixed
- [ ] Re-verify all checks pass
```

Step 1: Run All Checks

Option A – Script (recommended):

```bash
./scripts/run-all-checks.sh
```

Option B – Sequential manual:

```bash
source venv/bin/activate
python -m ruff check src tests examples && \
python -m ruff format --check src tests examples && \
python -m mypy src tests examples && \
python -m pytest tests -q && \
(cd docs && make clean && make html SPHINXOPTS="-W")
```

Step 2: Analyze Results

Check output for:

  • Errors: Must be fixed (non-zero exit code).
  • Warnings: Must be fixed. The docs build is run with SPHINXOPTS="-W", so Sphinx warnings cause the documentation check to fail.

Common error types:

| Check  | Error Pattern              | Typical Fix                                 |
| ------ | -------------------------- | ------------------------------------------- |
| ruff   | F401 unused import         | Remove import                               |
| ruff   | UP035 typing import        | Use collections.abc for ABCs                |
| ruff   | ARG001 unused argument     | Prefix with _ or # noqa: ARG001             |
| mypy   | error: ... is not defined  | Add import or fix typo                      |
| mypy   | assignment, attr-defined   | Add type annotations or # type: ignore[...] |
| pytest | FAILED or ERROR            | Fix test or code under test                 |
| sphinx | WARNING: duplicate object  | Add :no-index: or fix duplicate             |

Step 3: Fix Issues

For each error:

  1. Read the error message and file/line.
  2. Open the file and apply the appropriate fix.
  3. Re-run the failing check to confirm.

Step 4: Re-verify

After fixing, run all checks again:

```bash
./scripts/run-all-checks.sh
```

All checks should pass with exit code 0.

Common Fixes Reference

Ruff Unused Argument (ARG001)

For pytest fixtures that are dependencies but not directly used:

```python
def my_fixture(other_fixture: None) -> None:  # noqa: ARG001
    ...
```

Ruff UP035 (typing → collections.abc)

Use collections.abc for AsyncGenerator, Iterable, etc.:

```python
from collections.abc import AsyncGenerator, Iterable
```

Sphinx Duplicate Object Warning

Add :no-index: to automodule directive:

```rst
.. automodule:: cloud_tasks.config
   :members:
   :no-index:
```

Mypy in Tests

The test suite relies on mypy overrides in pyproject.toml (module = "tests.*") that relax the method-assign and attr-defined error codes, which are common with mocks. For other mypy errors in tests, add a targeted # type: ignore[code] or fix the type.
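
A minimal sketch of such an override (the exact section contents here are illustrative; the project's actual pyproject.toml may differ):

```toml
# Hypothetical pyproject.toml fragment: relax mypy only for the test suite.
[[tool.mypy.overrides]]
module = "tests.*"
disable_error_code = ["method-assign", "attr-defined"]
```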

Type Annotation Errors

For union or forward-reference issues:

```python
from __future__ import annotations  # Add at the top of the file
```
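
As a concrete illustration (the Node class is hypothetical), the self-referencing return annotation below would raise NameError at class-definition time without the future import, because Node is not yet bound when the method is defined; with it, annotations are stored as strings and never eagerly evaluated:

```python
from __future__ import annotations  # annotations become lazily-evaluated strings

class Node:
    def clone(self) -> Node:  # forward reference to Node: no NameError
        return Node()

print(type(Node().clone()).__name__)  # prints "Node"
```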

Success Criteria

All checks pass when:

  • ruff check src tests examples → "All checks passed!"
  • ruff format --check src tests examples → "All files formatted"
  • mypy src tests examples → "Success: no issues found"
  • pytest tests -q → All tests pass
  • make html SPHINXOPTS="-W" (in docs/) → Build completes with exit 0 (no errors or warnings)

Related Skills

Looking for an alternative to run-all-checks, or building a community AI agent? Related open-source skills include:

  • widget-generator: an open-source skill for creating widget plugins injected into prompt feeds on prompts.chat, supporting standard PromptCard-styled widgets and custom render widgets built as full React components.
  • testing (lobehub): verifies AI agent functionality using commands like bunx vitest run, with targeted test runs to optimize workflows.
  • chat-sdk (lobehub): a unified TypeScript SDK for building chat bots across multiple platforms through a single interface.
  • zustand (lobehub)