bk-analyze — community skill · IDE skills

v1.0.0

About This Skill

Ideal for code-analysis agents that need advanced legacy-code transformation and formal specification generation. Reverse engineer brownfield code to specifications. Extract business rules, interfaces, and behaviors from existing implementations to generate BK-compatible specs.

dikini
Updated: 3/5/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 6/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 6/11
Quality Score: 57
Canonical Locale: en
Detected Body Locale: en


Core Value

Gives agents the ability to transform existing code into formal specifications, enabling incremental modernization and compliance documentation of legacy systems, using BK development workflow integration and refactoring protocols.

Suitable Agent Types

Ideal for code-analysis agents that need advanced legacy-code transformation and formal specification generation.

Key Capabilities · bk-analyze

  • Transform unfamiliar brownfield codebases into formal specifications
  • Generate compliance documentation from existing legacy code
  • Prepare legacy systems for refactoring or migration

! Usage Limitations and Requirements

  • Requires an existing codebase to analyze
  • Limited to brownfield codebases
  • Requires integration with the BK development workflow

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The page lacks a strong recommendation layer.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Next Steps After Review

Decide on an action first, then continue to the upstream repository material.

The primary value of Killer-Skills should not stop at "opening the repository README for you." It should first help you judge whether this skill is worth installing, whether it should go back to a trusted collection for re-review, and whether it is ready to be adopted into your workflow.

Lab Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ and Installation Steps

The questions and steps below are kept consistent with the page's structured data, to help search engines understand the page content.

? FAQ

What is bk-analyze?

Ideal for code-analysis agents that need advanced legacy-code transformation and formal specification generation. Reverse engineer brownfield code to specifications. Extract business rules, interfaces, and behaviors from existing implementations to generate BK-compatible specs.

How do I install bk-analyze?

Run: npx killer-skills add dikini/knot/bk-analyze. Supports 19+ IDEs/Agents, including Cursor, Windsurf, VS Code, and Claude Code.

What scenarios is bk-analyze suited for?

Typical scenarios include: transforming unfamiliar brownfield codebases into formal specifications, generating compliance documentation from existing legacy code, and preparing legacy systems for refactoring or migration.

Which IDEs or Agents does bk-analyze support?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. It can be installed universally with a single Killer-Skills CLI command.

What are bk-analyze's limitations?

It requires an existing codebase to analyze, is limited to brownfield codebases, and requires integration with the BK development workflow.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add dikini/knot/bk-analyze. The CLI automatically detects your IDE or AI Agent and completes the configuration.

  3. Start using the skill

    bk-analyze is now enabled and can be invoked immediately in the current project.

! Reference-Page Mode

This page remains usable as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above first, then decide whether to continue into the upstream repository material.

Upstream Repository Material

Upstream Source: bk-analyze — an AI Agent Skill for AI agent workflows and automation. See the review verdict, use cases, and installation path above.

SKILL.md

bk-analyze: Brownfield Code Analysis

Purpose

Transform existing code into formal specifications that integrate with the BK development workflow. Enables incremental modernization, refactoring, and compliance documentation of legacy systems.

When to Use

  • Starting work on an unfamiliar brownfield codebase
  • Documenting legacy systems that lack specifications
  • Preparing for refactoring or migration
  • Generating compliance documentation from existing code
  • Bridging knowledge gaps in maintained systems

When NOT to Use

  • Greenfield development (use bk-explore / bk-design instead)
  • Codebases that already have comprehensive specs (use bk-verify)
  • Simple bug fixes (use bk-debug / bk-tdd)

Inputs

```yaml
codebase_path: path    # Required: Root path to analyze
component_name: string # Required: Logical name for extracted component
depth: enum            # Optional: surface | standard | deep (default: standard)
entry_points: list     # Optional: Specific files/modules to focus analysis
exclude_patterns: list # Optional: Patterns to exclude (e.g., ["tests/", "vendor/"])
```

Discovery Conventions

This skill relies on filesystem discovery rather than status fields. Downstream skills locate specs via:

docs/specs/extracted/*.md    # Machine-generated specs
docs/specs/component/*.md    # Human-designed components
docs/specs/interface/*.md    # Human-designed interfaces
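
Since the convention is purely filesystem-based, downstream tooling can locate specs with a simple glob pass. A minimal sketch in Python, assuming this directory layout (the function name `discover_specs` is illustrative, not part of the skill):

```python
from pathlib import Path

def discover_specs(repo_root: str) -> dict[str, list[Path]]:
    """Locate spec files by filesystem convention rather than status fields."""
    specs_dir = Path(repo_root) / "docs" / "specs"
    return {
        "extracted": sorted((specs_dir / "extracted").glob("*.md")),  # machine-generated
        "component": sorted((specs_dir / "component").glob("*.md")),  # human-designed components
        "interface": sorted((specs_dir / "interface").glob("*.md")),  # human-designed interfaces
    }
```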

The spec-map.md registry tracks canonical component metadata and must stay aligned with component spec headers:

```yaml
spec_id: string # e.g., AUTH-001
source: enum    # extracted | designed
path: string    # relative to docs/specs/
concerns: list  # [SEC, REL, CAP, OBS, ...]
status: string  # exact component-spec status for designed specs
```

Workflow

1. Discovery Phase

Scan the codebase to understand structure:

Structural mapping:

  • Identify modules, crates, packages
  • Map public APIs (functions, traits, structs)
  • Find entry points and external boundaries
  • Locate test files for behavior inference

Source analysis:

  • Language detection (Rust, Python, Go, etc.)
  • Framework identification
  • Dependency graph (internal and external)

Output artifact: docs/analysis/<component>-structure.md (optional, for deep analysis)
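
As a rough sketch of the structural-mapping and language-detection steps, a scanner might walk the tree and bucket files by extension. The extension map and skip list below are assumptions, not part of the skill:

```python
from pathlib import Path
from collections import Counter

# Hypothetical extension-to-language map; extend as needed.
LANGS = {".rs": "Rust", ".py": "Python", ".go": "Go"}
# Directories to exclude from analysis (assumed defaults).
SKIP = {"vendor", "target", ".git", "node_modules"}

def detect_languages(codebase_path: str) -> Counter:
    """Count source files per language, skipping vendored/build directories."""
    counts: Counter = Counter()
    for path in Path(codebase_path).rglob("*"):
        if any(part in SKIP for part in path.parts):
            continue
        if path.is_file() and path.suffix in LANGS:
            counts[LANGS[path.suffix]] += 1
    return counts
```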

2. Extraction Phase

Extract specifications from code:

Functional requirements (FR-*)

  • Public methods → interface requirements
  • Business logic → behavioral requirements
  • Configuration → operational requirements

Interfaces (IF-*)

  • Function signatures
  • Data structures / DTOs
  • Error types and handling patterns

Cross-cutting concerns

  • Security boundaries → SEC
  • Retry/circuit breaker → REL
  • Metrics/logging → OBS
  • Rate limits/backpressure → CAP

Inference heuristics:

fn validate_token(&self, token: &str) -> Result<Claims, AuthError>
↓
FR-1: Validate authentication token
IF-1: Input: token string, Output: Claims or AuthError
SEC-1: Authentication boundary

#[test]
fn test_expired_token_rejected()
↓
FR-2: Reject expired tokens
Acceptance: Expired tokens return AuthError::Expired
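
The signature-to-interface heuristic above could be prototyped with a single regex pass over Rust source. The pattern and the draft record shape are illustrative only, not the skill's actual implementation:

```python
import re

# Matches Rust function signatures like:
#   fn validate_token(&self, token: &str) -> Result<Claims, AuthError>
FN_RE = re.compile(r"fn\s+(\w+)\s*\(([^)]*)\)\s*(?:->\s*([^\{;]+))?")

def extract_interfaces(source: str) -> list[dict]:
    """Turn each function signature into a draft IF-* entry."""
    entries = []
    for i, m in enumerate(FN_RE.finditer(source), start=1):
        name, params = m.group(1), m.group(2).strip()
        ret = (m.group(3) or "()").strip()
        entries.append({"id": f"IF-{i}", "name": name, "inputs": params, "output": ret})
    return entries
```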

3. Synthesis Phase

Generate formal BK specification:

File: docs/specs/extracted/<component>-<nnn>.md

Template:

````markdown
# <Component Name>

## Metadata
- ID: `<SCOPE>-<COMPONENT>-<NNN>` (e.g., COMP-AUTH-001)
- Source: `extracted`
- Component: `<component_name>`
- Depth: `<surface|standard|deep>`
- Extracted: `<date>`
- Concerns: [<inferred concerns>]

## Source Reference
- Codebase: `<codebase_path>`
- Entry Points: [<entry files>]
- Lines Analyzed: <count>

## Confidence Assessment
| Requirement | Confidence | Evidence | Needs Review |
|-------------|------------|----------|--------------|
| FR-1: ...   | high       | Clear implementation + tests | no  |
| FR-2: ...   | medium     | Inferred from behavior       | yes |

## Contract

### Functional Requirements
**FR-1**: <Extracted requirement>
- Evidence: `<file>:<line>`
- Confidence: <high|medium|low>

**FR-2**: <Extracted requirement>
...

### Interface
```<language>
<Extracted signatures>
```

### Behavior
Given <precondition> When <action> Then <outcome>
- Evidence: `<test file>:<line>` (if from test)

### Design Decisions (Inferred)
| Decision | Evidence | Confidence |
|----------|----------|------------|
| Uses async runtime | src/main.rs:23 | high   |
| Token expiry = 1hr | src/auth.rs:45 | medium |

### Uncertainties
- <Question about unclear code>
- <Ambiguous business rule>

### Acceptance Criteria (Derived from Tests)
- <Test-derived criterion>
- Extracted from: `<codebase_path>`
- Depends on: [<other specs if detected>]
- Used by: [<detected consumers>]
````

4. Registration Phase

Update `docs/specs/system/spec-map.md`:

```markdown
## Components
| Spec | Source | Path | Concerns |
|------|--------|------|----------|
| COMP-AUTH-001 | extracted | extracted/auth-001.md | [SEC, REL] |
```

Rules:

  • Generate next available <nnn> for component (001, 002, ...)
  • Set source: extracted (immutable)
  • Infer concerns from code patterns
  • Never duplicate existing spec IDs
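
The "next available <nnn>" rule can be sketched as a scan of the extracted/ directory, assuming the `<component>-<nnn>.md` naming convention shown earlier (the helper name is hypothetical):

```python
import re
from pathlib import Path

def next_spec_number(extracted_dir: str, component: str) -> str:
    """Return the next zero-padded sequence number for a component's specs."""
    pattern = re.compile(rf"^{re.escape(component)}-(\d{{3}})\.md$")
    taken = [
        int(m.group(1))
        for p in Path(extracted_dir).glob(f"{component}-*.md")
        if (m := pattern.match(p.name))
    ]
    return f"{max(taken, default=0) + 1:03d}"
```

Scanning the directory on every run, rather than caching a counter, keeps the rule consistent with the skill's filesystem-discovery convention.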

Confidence Levels

Every extracted requirement has a confidence level:

| Level  | Criteria                         | Action             |
|--------|----------------------------------|--------------------|
| high   | Clear code + comprehensive tests | Use as-is          |
| medium | Clear code OR tests, not both    | Review recommended |
| low    | Inferred from indirect evidence  | Must review        |

Downstream skills can filter by confidence if needed.
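
Confidence-based filtering might look like the following, assuming requirement records shaped like the Confidence Assessment table (the field names are assumptions):

```python
# Ordering of confidence levels, per the table above.
ORDER = {"low": 0, "medium": 1, "high": 2}

def filter_by_confidence(requirements: list[dict], minimum: str = "medium") -> list[dict]:
    """Keep only requirements at or above the given confidence level."""
    floor = ORDER[minimum]
    return [r for r in requirements if ORDER[r["confidence"]] >= floor]
```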

Output

  1. Primary: docs/specs/extracted/<component>-<nnn>.md - Formal specification
  2. Registry: Updated docs/specs/system/spec-map.md with immutable metadata
  3. Optional: docs/analysis/<component>-structure.md - Detailed structural analysis (deep mode)

Downstream Integration

With bk-plan

User: Plan implementation for auth module

1. LLM scans spec-map.md for auth-related specs
2. Finds COMP-AUTH-001 with source=extracted
3. Loads spec from docs/specs/extracted/auth-001.md
4. Notes confidence levels for planning
5. Proceeds with normal bk-plan workflow

Considerations:

  • Low-confidence requirements may need research tasks
  • Uncertainties become planning questions
  • Extracted interfaces may need refinement

With bk-verify

bk-verify --scope=full

1. Discovers spec in extracted/ directory
2. Scans codebase for SPEC-COMP-AUTH-001 markers
3. Reports coverage: which extracted requirements are marked in code

Note: Extracted specs may initially have no markers. The goal is to add markers during refactoring to establish traceability.
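
A minimal version of the marker scan could look like this, assuming markers appear as literal `SPEC-<spec_id>` strings in source comments and a small assumed set of source extensions:

```python
from pathlib import Path

def find_spec_markers(codebase_path: str, spec_id: str) -> list[tuple[str, int]]:
    """Return (file, line) locations where a spec marker appears in source."""
    marker = f"SPEC-{spec_id}"
    hits = []
    for path in Path(codebase_path).rglob("*"):
        # Assumed source extensions; widen as needed.
        if not path.is_file() or path.suffix not in {".rs", ".py", ".go"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            if marker in line:
                hits.append((str(path), lineno))
    return hits
```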

With bk-design

User: Refine the extracted auth spec

bk-design --spec-path=docs/specs/extracted/auth-001.md

1. Moves spec from extracted/ to component/ or interface/
2. Updates source from extracted → designed
3. Now treated as authoritative specification
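
The promotion step could be sketched as a file move plus a metadata rewrite, assuming the `Source:` metadata field from the spec template (the helper name `promote_spec` is hypothetical):

```python
from pathlib import Path

def promote_spec(spec_path: str, target_dir: str) -> Path:
    """Move an extracted spec into a designed directory and flip its source field."""
    src = Path(spec_path)
    dest = Path(target_dir) / src.name
    # Rewrite the metadata field from extracted to designed (first occurrence only).
    text = src.read_text().replace("Source: `extracted`", "Source: `designed`", 1)
    dest.write_text(text)
    src.unlink()
    return dest
```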

Example Workflow

Analyzing Legacy Authentication

User: Analyze the auth module in src/auth/

bk-analyze:

1. Discovery:
   - Scans src/auth/ (12 files, 2,400 LOC)
   - Identifies: token validation, session management, refresh flow
   - Finds: 23 tests covering main paths

2. Extraction:
   - FR-1: Validate JWT signature (high confidence - tests present)
   - FR-2: Refresh expired tokens (medium - inferred from flow)
   - IF-1: Auth trait with validate(), refresh()
   - Concerns: [SEC, REL] inferred from code patterns

3. Synthesis:
   Generates: docs/specs/extracted/auth-001.md
   
4. Registration:
   Updates: docs/specs/system/spec-map.md
   | COMP-AUTH-001 | extracted | extracted/auth-001.md | [SEC, REL] |

Output: Extracted 8 requirements with 75% high confidence. 
        2 uncertainties flagged for review.

Chaining to Planning

User: Plan refactoring for the auth module

bk-plan:

1. Discovery:
   - Scans spec-map.md for auth specs
   - Finds COMP-AUTH-001, source=extracted
   - Loads: docs/specs/extracted/auth-001.md

2. Analysis:
   - Sees FR-2 (refresh tokens) has medium confidence
   - Sees uncertainty about rate limiting
   - Adds research task for unclear requirements

3. Planning:
   - Creates tasks for high-confidence requirements
   - Adds "clarify rate limiting" task for uncertainty
   - Generates: docs/plans/auth-refactor-plan-001.md

Guardrails

  1. Never overwrite existing specs - Generate new <nnn> if conflict
  2. Mark all uncertainties explicitly - Don't guess business intent
  3. Preserve evidence - Every requirement cites source file/line
  4. Confidence honesty - Don't inflate confidence to appear complete
  5. Source separation - Keep extracted specs in dedicated directory

Limitations

  • Cannot infer implicit business knowledge
  • Tests may not cover all production behaviors
  • Legacy code may have dead/unreachable paths
  • Extracted specs represent "as-is", not "should-be"

Future Enhancements (Out of Scope)

  • Multi-language analysis
  • Cross-repository dependency mapping
  • Automated confidence improvement via test analysis
  • Diff-based re-extraction for evolving codebases

Related Skills

Looking for an alternative to bk-analyze, or similar community skills to pair with it? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f) — Generate customizable plugin widgets for prompts.chat's feedback system
  • flags (vercel) — React framework
  • pr-review (pytorch) — Tensors and dynamic neural networks in Python with strong GPU acceleration