quality-scan
<task> Your task is to perform comprehensive quality scans across the socket-sdk-js codebase using specialized agents to identify critical bugs, logic errors, and workflow problems. Before scanning, update dependencies and clean up junk files to ensure a clean, organized repository. Generate a prioritized report with actionable improvement tasks. </task>

<context>
**What is Quality Scanning?**

Quality scanning uses specialized AI agents to systematically analyze code for different categories of issues. Each agent type focuses on a specific problem domain and reports findings with severity levels and actionable fixes.

socket-sdk-js Architecture: This is Socket Security's TypeScript/JavaScript SDK that:
- Provides programmatic access to Socket.dev security analysis
- Implements HTTP client with retry logic and rate limiting
- Handles API authentication and request/response validation
- Generates strict TypeScript types from OpenAPI specifications
- Supports package scanning, SBOM generation, and organization management
- Implements comprehensive test coverage with Vitest
Scan Types Available:
- critical - Crashes, security vulnerabilities, resource leaks, data corruption
- logic - Algorithm errors, edge cases, type guards, off-by-one errors
- workflow - Build scripts, CI issues, cross-platform compatibility
- security - GitHub Actions workflow security (zizmor scanner)
- documentation - README accuracy, outdated docs, missing documentation
Why Quality Scanning Matters:
- Catches bugs before they reach production
- Identifies security vulnerabilities early
- Improves code quality systematically
- Provides actionable fixes with file:line references
- Prioritizes issues by severity for efficient remediation
- Keeps dependencies up-to-date
- Cleans up junk files for a well-organized repository
Agent Prompts:
All agent prompts are embedded in reference.md with structured <context>, <instructions>, <pattern>, and <output_format> tags following Claude best practices.
</context>
<constraints>
Do NOT:
- Fix issues during scan (analysis only - report findings)
- Skip critical scan types without user permission
- Report findings without file/line references
- Proceed silently if the codebase has uncommitted changes (warn first, then continue)
Do ONLY:
- Update dependencies before scanning
- Run enabled scan types in priority order (critical → logic → workflow)
- Generate structured findings with severity levels
- Provide actionable improvement tasks with specific code changes
- Report statistics and coverage metrics
- Deduplicate findings across scans
</constraints>
<instructions>
Process
Execute the following phases sequentially to perform comprehensive quality analysis.
Phase 1: Validate Environment
<prerequisites> Verify the environment before starting scans: </prerequisites>

<validation>
**Expected State:**
- Working directory is clean (warn if dirty, but continue)
- On a valid branch
- Node modules installed

```bash
git status
```

If the working directory is dirty:
- Warn user: "Working directory has uncommitted changes - continuing with scan"
- Continue with scans (quality scanning is read-only)
</validation>
Phase 2: Update Dependencies
<action> Update dependencies across Socket Security SDK repositories to ensure latest versions: </action>

Target Repositories:
- socket-sdk-js (current repository)
- socket-cli (../socket-cli/)
- socket-btm (../socket-btm/)
- socket-registry (../socket-registry/)
Update Process:
For each repository, run dependency updates:
<validation>
**For each repository:**
1. Check if directory exists (skip if not found)
2. Run `pnpm run update` command
3. Report success or failure
4. Track updated packages count
5. Continue even if some repos fail

```bash
# socket-sdk-js (current repo)
pnpm run update

# socket-cli
cd ../socket-cli && pnpm run update && cd -

# socket-btm
cd ../socket-btm && pnpm run update && cd -

# socket-registry
cd ../socket-registry && pnpm run update && cd -
```
</validation>
Expected Results:
- Dependencies updated in available repositories
- Report number of packages updated per repository
- Note any repositories that were skipped (not found)
- Continue with scan even if updates fail
Track for reporting:
- Repositories updated: N/4
- Total packages updated: N
- Failed updates: N (continue with warnings)
- Skipped repositories: [list]
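The per-repository update process above can be sketched as a single loop that skips missing checkouts and keeps going on failure. The counter names are illustrative; the sibling paths mirror the target list above:

```shell
# Sketch: update sibling repos, skipping any that are not checked out
# and continuing past failures. Counter names are illustrative.
UPDATED=0
SKIPPED=0
FAILED=0

for dir in ../socket-cli ../socket-btm ../socket-registry; do
  if [ ! -d "$dir" ]; then
    echo "skip: $dir not found"
    SKIPPED=$((SKIPPED + 1))
    continue
  fi
  if (cd "$dir" && pnpm run update); then
    UPDATED=$((UPDATED + 1))
  else
    echo "warn: update failed in $dir - continuing"
    FAILED=$((FAILED + 1))
  fi
done

echo "Repositories updated: $UPDATED, skipped: $SKIPPED, failed: $FAILED"
```

Every repository lands in exactly one bucket, which makes the N/4 summary at the end straightforward to report.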
Phase 3: Repository Cleanup
<action> Clean up junk files and organize the repository before scanning: </action>

Cleanup Tasks:
- Remove SCREAMING_TEXT.md files (all-caps .md files) that are NOT:
  - Inside the .claude/ directory
  - Inside the docs/ directory
  - Named README.md, LICENSE, or SECURITY.md
- Remove temporary test files in wrong locations: .test.mjs or .test.mts files outside test/ or __tests__/ directories
- Remove temp files: *.tmp, *.temp, .DS_Store, Thumbs.db
- Remove editor backups: *~, *.swp, *.swo, *.bak
- Remove test artifacts: *.log files in root or package directories (not logs/)
<validation>
**For each file found:**
1. Show the file path to user
2. Explain why it's considered junk
3. Ask user for confirmation before deleting (use AskUserQuestion)
4. Delete confirmed files: `git rm` if tracked, `rm` if untracked
5. Report files removed

```bash
# Find SCREAMING_TEXT.md files (all caps with .md extension)
find . -type f -name '*.md' \
  ! -path './.claude/*' \
  ! -path './docs/*' \
  ! -name 'README.md' \
  ! -name 'LICENSE' \
  ! -name 'SECURITY.md' \
  | grep -E '/[A-Z_]+\.md$'

# Find test files in wrong locations
find . -type f \( -name '*.test.mjs' -o -name '*.test.mts' \) \
  ! -path '*/test/*' \
  ! -path '*/__tests__/*' \
  ! -path '*/node_modules/*'

# Find temp files
find . -type f \( \
  -name '*.tmp' -o \
  -name '*.temp' -o \
  -name '.DS_Store' -o \
  -name 'Thumbs.db' -o \
  -name '*~' -o \
  -name '*.swp' -o \
  -name '*.swo' -o \
  -name '*.bak' \
\) ! -path '*/node_modules/*'

# Find log files in wrong places (not in logs/ or build/ directories)
find . -type f -name '*.log' \
  ! -path '*/logs/*' \
  ! -path '*/build/*' \
  ! -path '*/node_modules/*' \
  ! -path '*/.git/*'
```
</validation>
If no junk files found:
- Report: "✓ Repository is clean - no junk files found"
Important:
- Always get user confirmation before deleting
- Show file contents if user is unsure
- Track deleted files for reporting
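The tracked-vs-untracked deletion rule in step 4 above can be sketched with `git ls-files --error-unmatch`, which exits non-zero for paths git does not track. The `remove_junk` helper name is illustrative, and the demo runs in a throwaway repo so nothing real is touched:

```shell
# Sketch: delete a confirmed junk file with `git rm` when git tracks it,
# plain `rm` otherwise. remove_junk is an illustrative helper name.
remove_junk() {
  f="$1"
  if git ls-files --error-unmatch "$f" >/dev/null 2>&1; then
    git rm --quiet "$f"
  else
    rm -f "$f"
  fi
}

# Demo in a temporary repository.
tmp=$(mktemp -d)
cd "$tmp"
git init --quiet .
echo tracked > TRACKED_JUNK.md
git add TRACKED_JUNK.md
git -c user.email=demo@example.com -c user.name=demo commit --quiet -m 'add junk'
echo untracked > untracked.tmp

remove_junk TRACKED_JUNK.md   # tracked: removed via git rm (staged deletion)
remove_junk untracked.tmp     # untracked: removed via plain rm
```

Using `git rm` for tracked files keeps the deletion in the index, so it shows up in `git status` for the user to review.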
Phase 4: Determine Scan Scope
<action> Ask user which scans to run: </action>

Default Scan Types (run all unless user specifies):
- critical - Critical bugs (crashes, security, resource leaks)
- logic - Logic errors (algorithms, edge cases, type guards)
- workflow - Workflow problems (scripts, CI, git hooks)
- security - GitHub Actions security (template injection, cache poisoning, etc.)
- documentation - Documentation accuracy (README errors, outdated docs)
User Interaction: Use AskUserQuestion tool:
- Question: "Which quality scans would you like to run?"
- Header: "Scan Types"
- multiSelect: true
- Options:
- "All scans (recommended)" → Run all scan types
- "Critical only" → Run critical scan only
- "Critical + Logic" → Run critical and logic scans
- "Custom selection" → Ask user to specify which scans
Default: If user doesn't specify, run all scans.
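The answer-to-scan-list mapping can be sketched as a simple case statement; the option strings mirror the choices above, and the fallthrough branch implements the all-scans default:

```shell
# Illustrative mapping from the AskUserQuestion answer to scan types.
selection="Critical + Logic"   # example answer

case "$selection" in
  "All scans (recommended)") scans="critical logic workflow security documentation" ;;
  "Critical only")           scans="critical" ;;
  "Critical + Logic")        scans="critical logic" ;;
  *)                         scans="critical logic workflow security documentation" ;;
esac

echo "Scans to run: $scans"
```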
<validation>
Validate selected scan types exist in reference.md:
- critical-scan → reference.md line ~5
- logic-scan → reference.md line ~100
- workflow-scan → reference.md line ~300
- security-scan → reference.md line ~400
- documentation-scan → reference.md line ~810

If the user requests a non-existent scan type, report an error and suggest valid types.
</validation>
Phase 5: Execute Scans
<action> For each enabled scan type, spawn a specialized agent using the Task tool: </action>

```typescript
// Example: Critical scan
Task({
  subagent_type: "general-purpose",
  description: "Critical bugs scan",
  prompt: `${CRITICAL_SCAN_PROMPT_FROM_REFERENCE_MD}

Focus on src/ directory (HTTP client, SDK class, type generation scripts).

SDK-specific patterns to check:
- HTTP client error handling (src/http-client.ts)
- API method validation (src/socket-sdk-class.ts)
- Type generation scripts (scripts/generate-*.mjs)
- Promise handling and retry logic
- JSON parsing errors
- Rate limiting and timeout handling

Report findings in this format:
- File: path/to/file.ts:lineNumber
- Issue: Brief description
- Severity: Critical/High/Medium/Low
- Pattern: Code snippet
- Trigger: What input triggers this
- Fix: Suggested fix
- Impact: What happens if triggered

Scan systematically and report all findings. If no issues found, state that explicitly.`
})
```
For each scan:
- Load the agent prompt template from reference.md
- Customize for socket-sdk-js context (focus on src/, scripts/, test/)
- Spawn agent with Task tool using "general-purpose" subagent_type
- Capture findings from agent response
- Parse and categorize results
Execution Order: Run scans sequentially in priority order:
- critical (highest priority)
- logic
- workflow
- security
- documentation (lowest priority)
Agent Prompt Sources:
- Critical scan: reference.md starting at line ~12
- Logic scan: reference.md starting at line ~100
- Workflow scan: reference.md starting at line ~300
- Security scan: reference.md starting at line ~400
- Documentation scan: reference.md starting at line ~810
<validation>
After each agent returns, validate output structure before parsing:

```bash
# AGENT_OUTPUT holds the captured agent response; the variable name is
# illustrative - use whatever holds the Task tool's output.
AGENT_OUTPUT="$1"

# 1. Verify agent completed successfully
if [ -z "$AGENT_OUTPUT" ]; then
  echo "ERROR: Agent returned no output"
  exit 1
fi

# 2. Check for findings or clean report
if ! echo "$AGENT_OUTPUT" | grep -qE '(File:.*Issue:|No .* issues found|✓ Clean)'; then
  echo "WARNING: Agent output missing expected format"
  echo "Agent may have encountered an error or found no issues"
fi

# 3. Verify severity levels if findings exist
if echo "$AGENT_OUTPUT" | grep -q "File:"; then
  if ! echo "$AGENT_OUTPUT" | grep -qE 'Severity: (Critical|High|Medium|Low)'; then
    echo "WARNING: Findings missing severity classification"
  fi
fi

# 4. Verify fix suggestions if findings exist
if echo "$AGENT_OUTPUT" | grep -q "File:"; then
  if ! echo "$AGENT_OUTPUT" | grep -q "Fix:"; then
    echo "WARNING: Findings missing suggested fixes"
  fi
fi
```
Manual Verification Checklist:
- Agent output includes findings OR explicit "No issues found" statement
- All findings include file:line references
- All findings include severity level (Critical/High/Medium/Low)
- All findings include suggested fixes
- Agent output is parseable and structured
For each scan completion:
- Verify agent completed without errors
- Extract findings from agent output (or confirm "No issues found")
- Parse into structured format (file, issue, severity, fix)
- Track scan coverage (files analyzed)
- Log any validation warnings for debugging
</validation>
Phase 6: Aggregate Findings
<action> Collect all findings from agents and aggregate: </action>

```typescript
interface Finding {
  file: string       // "src/http-client.ts:89"
  issue: string      // "Potential null pointer access"
  severity: "Critical" | "High" | "Medium" | "Low"
  scanType: string   // "critical"
  pattern: string    // Code snippet showing the issue
  trigger: string    // What causes this issue
  fix: string        // Suggested code change
  impact: string     // What happens if triggered
}
```
Deduplication:
- Remove duplicate findings across scans (same file:line, same issue)
- Keep the finding from the highest priority scan
- Track which scans found the same issue
Prioritization:
- Sort by severity: Critical → High → Medium → Low
- Within same severity, sort by scanType priority
- Within same severity+scanType, sort alphabetically by file path
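Assuming findings can be flattened to `severity|file:line|issue|scanRank` lines, the dedup-then-prioritize rules above can be sketched with standard text tools (the sample findings are hypothetical):

```shell
# Sketch: deduplicate on (file:line, issue) keeping the highest-severity hit,
# then sort Critical → Low, then by scan priority, then by file path.
# Input format (illustrative): severity|file:line|issue|scanRank
findings='High|src/http-client.ts:89|Potential null pointer access|2
Critical|src/http-client.ts:89|Potential null pointer access|1
Medium|src/api.ts:10|Off-by-one in pagination|2'

deduped=$(printf '%s\n' "$findings" \
  | awk -F'|' '{ r = ($1=="Critical") ? 1 : ($1=="High") ? 2 : ($1=="Medium") ? 3 : 4; print r "|" $0 }' \
  | sort -t'|' -k1,1n -k5,5n -k3,3 \
  | awk -F'|' '!seen[$3 "|" $4]++' \
  | cut -d'|' -f2-)

printf '%s\n' "$deduped"
```

Sorting before deduplicating means the first occurrence of each (file:line, issue) pair is always the highest-severity one, so the `!seen[...]++` filter keeps exactly the finding the rules call for.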
Phase 7: Generate Report
<action> Create a structured quality report with all findings: </action>

```markdown
# Quality Scan Report

**Date:** YYYY-MM-DD
**Repository:** socket-sdk-js
**Scans:** [list of scan types run]
**Files Scanned:** N
**Findings:** N critical, N high, N medium, N low

## Critical Issues (Priority 1) - N found

### src/http-client.ts:89
- **Issue**: Potential null pointer access in retry logic
- **Pattern**: `const result = response.data.items[0]`
- **Trigger**: When API returns empty array
- **Fix**: `const items = response.data?.items ?? []; if (items.length === 0) throw new Error('No items found'); const result = items[0]`
- **Impact**: Crashes SDK, breaks user applications
- **Scan**: critical

## High Issues (Priority 2) - N found

[Similar format for high severity issues]

## Medium Issues (Priority 3) - N found

[Similar format for medium severity issues]

## Low Issues (Priority 4) - N found

[Similar format for low severity issues]

## Scan Coverage

- **Critical scan**: N files analyzed in src/, scripts/
- **Logic scan**: N files analyzed (API methods, type generation)
- **Workflow scan**: N files analyzed (package.json, scripts/, .github/)

## Recommendations

1. Address N critical issues immediately before next release
2. Review N high-severity logic errors in HTTP client
3. Schedule N medium issues for next sprint
4. Low-priority items can be addressed during refactoring

## No Findings

[If a scan found no issues, list it here:]
- Critical scan: ✓ Clean
- Logic scan: ✓ Clean
```
Output Report:
- Display report to console (user sees it)
- Offer to save to a file (optional): reports/quality-scan-YYYY-MM-DD.md
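Saving the report can be sketched as follows; the `reports/` path and date format follow the filename convention above, and `REPORT_BODY` stands in for the rendered markdown:

```shell
# Sketch: write the rendered report to reports/quality-scan-YYYY-MM-DD.md.
# REPORT_BODY is a placeholder for the generated markdown report.
REPORT_BODY="# Quality Scan Report (placeholder body)"
report_file="reports/quality-scan-$(date +%Y-%m-%d).md"

mkdir -p reports
printf '%s\n' "$REPORT_BODY" > "$report_file"
echo "Report saved: $report_file"
```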
Phase 8: Complete
<completion_signal>

```xml
<promise>QUALITY_SCAN_COMPLETE</promise>
```

</completion_signal>
<summary> Report these final metrics to the user:

Quality Scan Complete
✓ Dependency updates: N repositories, N packages updated
✓ Repository cleanup: N junk files removed
✓ Scans completed: [list of scan types]
✓ Total findings: N (N critical, N high, N medium, N low)
✓ Files scanned: N
✓ Report generated: Yes
✓ Scan duration: [calculated from start to end]
Dependency Update Summary:
- socket-sdk-js: N packages updated
- socket-cli: N packages updated (or skipped)
- socket-btm: N packages updated (or skipped)
- socket-registry: N packages updated (or skipped)
Repository Cleanup Summary:
- SCREAMING_TEXT.md files removed: N
- Temporary test files removed: N
- Temp/backup files removed: N
- Log files cleaned up: N
Critical Issues Requiring Immediate Attention:
- N critical issues found
- Review report above for details and fixes
Next Steps:
- Address critical issues immediately
- Review high-severity findings
- Schedule medium/low issues appropriately
- Re-run scans after fixes to verify
All findings include file:line references and suggested fixes.
</summary>
</instructions>

Success Criteria
- ✅ <promise>QUALITY_SCAN_COMPLETE</promise> output
- ✅ Dependencies updated in available repositories
- ✅ All enabled scans completed without errors
- ✅ Findings prioritized by severity (Critical → Low)
- ✅ All findings include file:line references
- ✅ Actionable suggestions provided for all findings
- ✅ Report generated with statistics and coverage metrics
- ✅ Duplicate findings removed
Scan Types
See reference.md for detailed agent prompts with structured tags:
- critical-scan - Null access, promise rejections, race conditions, resource leaks
- logic-scan - Off-by-one errors, type guards, edge cases, algorithm correctness
- workflow-scan - Scripts, package.json, git hooks, CI configuration
- security-scan - GitHub Actions workflow security (runs zizmor scanner)
- documentation-scan - README accuracy, outdated examples, incorrect package names, missing documentation
All agent prompts follow Claude best practices with <context>, <instructions>, <pattern>, <output_format>, and <quality_guidelines> tags.
Commands
This skill is self-contained. No external commands needed.
Context
This skill provides systematic code quality analysis for socket-sdk-js by:
- Updating dependencies before scanning to ensure latest versions
- Spawning specialized agents for targeted analysis
- Using Task tool to run agents autonomously
- Embedding agent prompts in reference.md following best practices
- Generating prioritized, actionable reports
- Supporting partial scans (user can select specific scan types)
For detailed agent prompts with best practices structure, see reference.md.