Killer-Skills

create_word_tree

v1.0.0
GitHub
About this Skill

create_word_tree is a skill that generates word trees from daily notes in a language folder, using commands such as date +%F to determine the target date. It is well suited to language-learning agents that need structured vocabulary-building capabilities.

Features

  • Determines the target date using the date +%F command
  • Confirms the presence of a vocab/ directory in the current working directory
  • Reads source notes from vocab/daily/daily-notes-YYYY-MM-DD.md files
  • Extracts usable words from source notes, ignoring headings and blank lines
  • Treats a source as empty if it contains only placeholder lines such as -

Author: aerkn1
Updated: 3/1/2026
Installation

Universal install (auto-detects Cursor, Windsurf, and VS Code):

> npx killer-skills add aerkn1/lang-tutor/create_word_tree

Agent Capability Analysis

The create_word_tree MCP Server by aerkn1 is an open-source integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for Language Learning Agents needing structured vocabulary building capabilities.

Core Value

Empowers agents to generate word trees from daily notes in Markdown format, utilizing file system access and date-based note retrieval, and provides a structured approach to vocabulary building for language learners and educators.

Capabilities Granted for create_word_tree MCP Server

  • Automating vocabulary tree generation for language learners
  • Generating daily word trees from Markdown notes
  • Enhancing language education with structured vocabulary tools

Prerequisites & Limits

  • Requires file system access to vocabulary notes
  • Limited to directories containing 'vocab/' folder
  • Dependent on specific date formatting (YYYY-MM-DD) in file names
Project Files

  • SKILL.md (5.5 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Create Word Tree

Use this skill when the current working directory is a language folder such as german.

Workflow

  1. Determine the target date. Default to today using date +%F unless the user names a specific date.
  2. Confirm the current directory contains vocab/.
  3. Read the source note: vocab/daily/daily-notes-YYYY-MM-DD.md
  4. Treat the source as empty when it is missing or when it only contains headings, blank lines, or placeholder lines such as -.
  5. Extract every usable vocab item from the file. Treat each non-empty study line, bullet, numbered item, inline comma-separated item, or short phrase as a candidate entry.
  6. Normalize each entry into a readable lemma: keep the original German form, remove list markers, and preserve short fixed phrases when the source note uses a phrase instead of a single word.
  7. Classify each entry into the best-fit part of speech: noun, verb, adjective, adverb, pronoun, conjunction, preposition, article, phrase, or other.
  8. Add the correct article first whenever the entry is a noun and the article can be stated or reliably inferred.
  9. Expand each entry with grammar details that actually exist for that type:
     • verbs: ich, du, er/sie/es, wir, ihr, sie/Sie, plus simple past and past participle (v2-v3 style forms)
     • nouns and noun phrases: singular and plural when available, plus nominative, accusative, and dative article or declension forms when applicable
     • adjectives or similar inflected words: nominative, accusative, and dative forms only when they are genuinely useful and reliable
     • non-inflected items: do not invent conjugations or declensions
  10. Prefer accuracy over completeness. If a form is uncertain, omit it and note that it was not confidently derived.
  11. Verify the CEFR level for each entry with web search before assigning a level.
  12. Normalize CEFR levels to A1, A2, B1, B2, C1, or C2.
  13. If no source clearly provides a CEFR level, place the item in UNKNOWN and state that the level could not be verified.
  14. Rewrite the source note at vocab/daily/daily-notes-YYYY-MM-DD.md into the normalized master schema after analysis.
  15. The rewritten daily note must preserve the original captured items in a Raw Capture section, then add a Normalized Word Tree section that groups every normalized entry by part of speech.
  16. In the rewritten daily note, each entry should include:
  • normalized lemma or phrase
  • original source form when normalization changed it
  • meaning
  • grammar forms that apply
  • provisional or verified CEFR level
  • daily CEFR file path
  • cumulative CEFR file path
  17. Group entries by CEFR level and create one dated file per level at: vocab/<CEFR>/daily/vocab-<CEFR>-YYYY-MM-DD.md
  18. Update the cumulative CEFR file for each level at: vocab/<CEFR>/vocab-<CEFR>.md
  19. If a dated target file already exists, do not overwrite it unless the user explicitly asks to regenerate it.
  20. When updating the cumulative CEFR file, add only new words for that level. Do not duplicate entries that are already present.
  21. After creating, regenerating, or skipping output files, append a summary entry to ../CHANGELOG.md by running: ../scripts/append-skill-log.sh "create_word_tree" "<language>" "<summary>"
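Steps 1 through 4 can be sketched in shell. The function names and messages below are illustrative, not part of the skill:

```shell
# Steps 1-3: resolve the target date (default: today, via date +%F) and build
# the source-note path, after confirming vocab/ exists in the current directory.
resolve_source_note() {
  target_date="${1:-$(date +%F)}"                              # YYYY-MM-DD
  [ -d vocab ] || { echo "no vocab/ directory in $(pwd)" >&2; return 1; }
  echo "vocab/daily/daily-notes-${target_date}.md"
}

# Step 4: a note counts as empty when it is missing, or when every line is
# a heading, a blank line, or a bare "-" placeholder.
note_is_empty() {
  [ -f "$1" ] || return 0
  ! grep -qvE '^[[:space:]]*(#.*)?$|^[[:space:]]*-[[:space:]]*$' "$1"
}
```

With no argument, resolve_source_note falls back to today's date; with an explicit date such as 2026-03-01 it yields vocab/daily/daily-notes-2026-03-01.md.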

CEFR Lookup Rules

  • Use web search for each entry because CEFR labels are source-dependent and can change across dictionaries.
  • Prefer dictionary or language-learning sources that explicitly publish CEFR labels.
  • If multiple sources disagree, choose the level supported by the strongest explicit source and briefly note the ambiguity.
  • Make it clear when a CEFR level is inferred from partial evidence rather than directly labeled by the source.
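Once a level string has been obtained from a source, normalizing it (steps 12 and 13 of the workflow) is mechanical. A minimal sketch, with an illustrative function name:

```shell
# Normalize a looked-up level string to A1-C2, or UNKNOWN when the
# source did not provide a recognizable CEFR label.
normalize_cefr() {
  level=$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')
  case "$level" in
    A1|A2|B1|B2|C1|C2) echo "$level" ;;
    *)                 echo "UNKNOWN" ;;
  esac
}
```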

Output Structure

Each generated dated CEFR file must include:

  • A title with the CEFR level and date
  • Source Note
  • Level Summary
  • One section per part of speech present in that file
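A dated CEFR file following this structure might look like the sketch below; the level, date, and counts are invented for illustration:

```
# B1 Vocab - 2026-03-01

## Source Note
vocab/daily/daily-notes-2026-03-01.md

## Level Summary
2 entries: 1 noun, 1 verb

## Nouns
...

## Verbs
...
```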

Each cumulative CEFR file must include:

  • A title with the CEFR level
  • Level Summary
  • Source Files
  • One section per part of speech present in that level

Each vocab entry should be rendered as a compact, readable block that includes:

  • the main word or phrase
  • article first, if applicable
  • part of speech
  • the grammar forms that apply
  • a short CEFR note with the source or confidence
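As an illustration only (the word, forms, and level shown are examples, not prescribed output), one such block might render as:

```
### der Tisch (noun)
- forms: der Tisch / die Tische; Akk. den Tisch; Dat. dem Tisch
- meaning: table
- CEFR: A1 (explicitly labeled by a dictionary source)
```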

The rewritten daily note must include:

  • the original title with the date
  • Raw Capture
  • Normalized Word Tree
  • one section per part of speech present in the source

Generation Rules

  • Cover every usable vocab item from the source note. Do not silently drop entries.
  • Cover every usable vocab item in both the rewritten daily note and the CEFR-split files.
  • Keep the per-day CEFR files under vocab/<CEFR>/daily/.
  • Keep the cumulative CEFR files at vocab/<CEFR>/vocab-<CEFR>.md.
  • When a new item belongs to a level, update both the per-day file and the cumulative file for that level.
  • Keep entries grouped by part of speech inside each CEFR file.
  • Keep entries grouped by part of speech inside the rewritten daily note as well.
  • Preserve the original source wording when the note contains a phrase, but still classify it as precisely as possible.
  • Do not fabricate forms that are not valid for the entry type.
  • When the source note is empty or missing, report that clearly and do not create misleading vocab files or a misleading normalized rewrite.
  • The changelog summary should include the target date, the source note path, whether the daily note was rewritten, and which CEFR files were created, regenerated, or skipped.
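The no-duplicates rule for cumulative files can be sketched as a line-level check. The function name is illustrative, and it assumes one lemma per line; real entries are multi-line blocks, so a full implementation would key on the lemma heading instead:

```shell
# Append only lemmas not already present (as exact lines) in the
# cumulative file. Simplified: assumes one lemma per line.
append_new_words() {
  cumulative="$1"; shift
  touch "$cumulative"
  for word in "$@"; do
    grep -qxF "$word" "$cumulative" || printf '%s\n' "$word" >> "$cumulative"
  done
}
```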
