Localized skill summary: an operating system for autonomous research, from literature to manuscript, inside a governed, checkpointed loop. It covers the ai-agents, autonomous-research, and checkpointing topics, and supports Claude Code, Cursor, and Windsurf workflows.
Memory Checkpoint is an AI agent skill that streamlines biomedical literature research for developers and researchers, providing an MCP server with 40 tools, multi-source search, and full-text access.
Review checkpoint specs and tests to identify tests that encode ambiguous interpretations rather than explicit requirements. Use when asked to check checkpoint_N.md against test_checkpoint_N.py, when auditing tests for ambiguity, or when reviewing snapshot eval failures for interpretive issues.
Run after completing significant work: implementing a feature, fixing a bug, refactoring, or making any other substantial code change. Proactively ensure code quality before reporting the work as done.
Plan code refactors by defining goals and non-goals, mapping dependencies, sequencing phases, and specifying verification and rollback checkpoints. Triggers: requests to plan a refactor, restructure, rename, or move modules.
Debug preprocessing pipeline failures. Guides through reading checkpoint files, checking step artifacts, interpreting QC metrics, examining visualization PNGs, and identifying which step failed and why. Use when a preprocessing run produces unexpected results, crashes, or generates poor-quality outputs.
Diagnose active, stuck, or failed Kilroy Attractor runs: inspect run artifacts (`manifest.json`, `live.json`, `checkpoint.json`, `final.json`, `progress.ndjson`), resolve run IDs and log roots, identify model/provider routing, and isolate failure causes. Includes CXDB operations for launching and probing CXDB, opening the CXDB UI, and querying run context turns. Use when investigating run status, debugging retries or failures, explaining model usage, or inspecting CXDB-backed event history.