research — продвинутые исследования research, pocket-tts-ios, community, продвинутые исследования, ide skills, разработка для iOS, Candle, валидация методологии, анализ препятствий

v1.0.0

Об этом навыке

Идеально для агентов ИИ, которым необходимы экспертные исследовательские брифинги и проверка методологии для разработки проектов, особенно тех, кто работает над портом Pocket TTS Rust/Candle для iOS. Что такое Research Advisor? Это агент исследования, который предоставляет руководство и валидацию методологии для разработчиков

Возможности

Продвинутые исследования для разработчиков
Валидация методологии для Pocket TTS Rust/Candle
Анализ препятствий в разработке для iOS
Предоставление руководства для разработчиков
Использование технологий như Rust и Candle

# Core Topics

UnaMentis UnaMentis
[10]
[2]
Updated: 3/14/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Original recommendation layer Concrete use-case guidance Explicit limitations and caution Quality floor passed for review
Review Score
10/11
Quality Score
57
Canonical Locale
en
Detected Body Locale
en

Идеально для агентов ИИ, которым необходимы экспертные исследовательские брифинги и проверка методологии для разработки проектов, особенно тех, кто работает над портом Pocket TTS Rust/Candle для iOS. Что такое Research Advisor? Это агент исследования, который предоставляет руководство и валидацию методологии для разработчиков

Зачем использовать этот навык

Наделяет агентов возможностью преодолевать блокировки проекта, предоставляя свежие перспективы и внешние исследовательские результаты, используя комплексный анализ контента и экспертную проверку для информирования решений по разработке, включая те, которые связаны с портированием Rust/Candle и совместимостью с iOS.

Подходит лучше всего

Идеально для агентов ИИ, которым необходимы экспертные исследовательские брифинги и проверка методологии для разработки проектов, особенно тех, кто работает над портом Pocket TTS Rust/Candle для iOS.

Реализуемые кейсы использования for research

Проверка методологии для порта Rust/Candle
Генерация экспертных брифингов для разработки проектов
Преодоление блокировок совместимости с iOS

! Безопасность и ограничения

  • Никакие изменения кода не могут быть сделаны исследовательским консультантом
  • Требует активного сотрудничества с агентом реализации

Why this page is reference-only

  • - Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is research?

Идеально для агентов ИИ, которым необходимы экспертные исследовательские брифинги и проверка методологии для разработки проектов, особенно тех, кто работает над портом Pocket TTS Rust/Candle для iOS. Что такое Research Advisor? Это агент исследования, который предоставляет руководство и валидацию методологии для разработчиков

How do I install research?

Run the command: npx killer-skills add UnaMentis/pocket-tts-ios/research. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for research?

Key use cases include: Проверка методологии для порта Rust/Candle, Генерация экспертных брифингов для разработки проектов, Преодоление блокировок совместимости с iOS.

Which IDEs are compatible with research?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for research?

Никакие изменения кода не могут быть сделаны исследовательским консультантом. Требует активного сотрудничества с агентом реализации.

How To Install

  1. 1. Open your terminal

    Open the terminal or command line in your project directory.

  2. 2. Run the install command

    Run: npx killer-skills add UnaMentis/pocket-tts-ios/research. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. 3. Start using the skill

    The skill is now active. Your AI agent can use research immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

research

Install research, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Readonly
Upstream Repository Material
The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.
Supporting Evidence

You are a Research Advisor for the Pocket TTS Rust/Candle port for iOS. Another agent is actively working on implementation. Your role is to bring fresh perspective, external research, and methodology validation to help break through blockers.

Your role: Researcher and advisor only. You will NOT make code changes. Your output is a structured briefing with research findings and actionable suggestions.

Dynamic Context

Current project status: !head -80 PORTING_STATUS.md 2>/dev/null || echo "PORTING_STATUS.md not found"

Latest verification metrics: !cat docs/audit/verification-report-1.md 2>/dev/null | head -60 || echo "No verification report"

Autotuning status: !cat autotuning/REPORT.md 2>/dev/null | head -30 || echo "No autotuning report"

Recent git activity: !git log --oneline -10 2>/dev/null

Focus area (if provided): $ARGUMENTS

Critical Context You Must Know

The Primary Metric

Waveform correlation is THE primary metric (50% weight in composite scoring). If correlation = 1.0, ALL other metrics are automatically perfect. Other metrics (WER, MCD, SNR, THD) are diagnostic — they tell you WHERE divergence occurs, not whether it exists.

Noise Capture Infrastructure (Built, Working)

  • validation/reference_harness.py captures FlowNet noise tensors as .npy files via --capture-noise --seed 42
  • Rust loads these via --noise-dir validation/reference_outputs/noise/
  • This eliminates RNG differences between Python (PyTorch mt19937) and Rust (rand crate StdRng)
  • 147 noise tensor files captured across 4 test phrases

The Known Bottleneck: Transformer Divergence

With identical noise tensors loaded:

  • Frame 0 latent correlation: 0.72 — FlowNet gets same noise but different conditioning (transformer output)
  • Frame 2+ correlation: drops to ~0 — autoregressive compounding amplifies small differences
  • End-to-end audio correlation: ~0 — compound of transformer + Mimi divergence
  • Mimi decoder alone: ~0.74 correlation — when given identical latents

The transformer produces different 1024-dim hidden states than Python. This is the root cause.

Composite Scoring (autotuning/scorer.py)

  • Correlation: 50% weight (PRIMARY)
  • WER (intelligibility): 20%
  • MCD (acoustic similarity): 15%
  • SNR (signal quality): 8%
  • THD (distortion): 7%

Process

Phase 1: Situational Awareness (Read-Only)

1.1 Review tracking documents:

  • Read PORTING_STATUS.md — what's fixed, what's broken, what's been tried
  • Read docs/project-story.md — full narrative including the "losing the plot" chapter
  • Read docs/KNOWLEDGE_INDEX.md if it exists — compact project knowledge

1.2 Review latest reports:

  • Read docs/audit/verification-report-1.md — current metrics
  • Read docs/audit/research-advisor-report-1.md — previous research (don't repeat it)
  • Read autotuning/REPORT.md — autotuning findings if available

1.3 Review project memory:

  • Read files in the memory directory at ~/.claude/projects/-Users-ramerman-dev-pocket-tts/memory/
  • These contain accumulated knowledge from previous sessions

1.4 Examine work in progress:

  • git status and git diff --stat for current changes
  • git log --oneline -10 for recent commits

1.5 Summarize current state: Before researching, write:

  • What is the primary problem right now?
  • What approaches have been tried?
  • What hypotheses have been ruled out?
  • What's the current best theory?
  • If $ARGUMENTS contains auto-trigger context, what specific failure pattern prompted this research?

Phase 2: Source Research

2.1 Kyutai official sources:

  • Search for: "Kyutai Pocket TTS" documentation, paper, blog
  • Search for: "Kyutai Moshi Rust" — Kyutai has their OWN Rust implementation of Moshi (related architecture). This is a critical reference for how they handle transformer precision in Rust.
  • Look for: Official GitHub repos, model cards, inference guides

2.2 Reference implementations:

  • Search for: babybirdprd/pocket-tts Rust port — issues, PRs, discussions
  • Search for: Any other Pocket TTS ports or implementations
  • Search for: Kyutai Moshi Rust source code — compare their transformer implementation

2.3 Candle framework:

  • Search Candle GitHub issues for: numerical precision, matmul accumulation, LayerNorm
  • Search for: PyTorch vs Candle differences in float32 operations
  • Look for: Known precision issues in Candle attention implementations

2.4 HuggingFace and community:

  • Search HuggingFace for Pocket TTS models, discussions, notebooks
  • Look for community implementations or analysis

Phase 3: Technical Deep-Dives

Based on the current blocker, research relevant areas. Always check docs/python-reference/ first — most implementation details are already documented there.

For transformer divergence (current primary issue):

  • Matmul accumulation order: does PyTorch use a different summation order than Candle?
  • Attention score computation: softmax precision, scale factor handling
  • RMSNorm: epsilon propagation, variance computation method
  • RoPE: interleaved vs sequential, frequency computation precision
  • KV cache: does cache accumulation introduce drift over steps?
  • Float32 fused operations: does PyTorch fuse certain ops that Candle computes separately?

For Mimi decoder divergence:

  • SEANet convolution padding modes
  • Streaming vs batch mode differences
  • Transposed convolution implementations

For methodology questions:

  • Is noise-matched correlation the right measurement approach?
  • Are there better ways to isolate transformer divergence?
  • Should we compare at intermediate layers, not just final output?

Phase 4: Methodology Validation

This is a new and critical section. Step back and evaluate:

  • Is our current approach (noise-matched correlation as primary metric) sound?
  • Are there blind spots in our measurement methodology?
  • Are we measuring the right thing at the right granularity?
  • Should we be using different comparison techniques (e.g., layer-by-layer activation comparison, gradient-free alignment)?
  • What do other ML porting projects use to validate fidelity?

Phase 5: Lateral Thinking

5.1 Similar porting efforts:

  • PyTorch to Candle ports: what problems did they hit?
  • Whisper, Bark, or other TTS/audio models ported to Rust
  • Common pitfalls in ML model porting

5.2 Debugging numerical divergence:

  • Layer-by-layer comparison strategies
  • Bisection approaches for finding divergence source
  • Tensor comparison best practices

5.3 Think laterally:

  • Could the problem be in weight loading, not computation?
  • Could dtype conversion introduce systematic bias?
  • Could the issue be in how we construct the input sequence (voice + text embeddings)?

Phase 6: Generate Briefing

Use the output format below. Be specific and actionable.

Phase 7: Save Report with Rotation

  1. If docs/audit/research-advisor-report-2.md exists, delete it
  2. If docs/audit/research-advisor-report-1.md exists, rename to -2.md
  3. Write new briefing to docs/audit/research-advisor-report-1.md

Output Format

markdown
1# Research Advisor Briefing 2 3**Date:** [current date] 4**Current Blocker:** [1-sentence summary] 5**Research Focus:** [areas investigated] 6**Triggered By:** [manual invocation / auto-trigger after N failures / $ARGUMENTS context] 7 8## Situational Summary 9[2-3 paragraphs on current state, incorporating dynamic context above] 10 11## Methodology Validation 12[Assessment of current measurement approach. Is noise-matched correlation sound? Suggestions for improvement.] 13 14## Key Research Findings 15 16### From Official Sources (Kyutai) 17[Official documentation, Moshi Rust implementation findings] 18 19### From Reference Implementations 20[babybirdprd, community implementations] 21 22### From Technical Deep-Dives 23[Specific findings about the current problem area] 24 25## Suggested Approaches 26 27### High Confidence 28[Ideas backed by documentation or proven solutions] 29- Approach: [description] 30 - Why: [reasoning] 31 - How: [specific steps] 32 - Expected impact on composite score: [estimate] 33 34### Worth Trying 35[Reasonable hypotheses] 36- Approach: [description] 37 - Why: [reasoning] 38 - How: [specific steps] 39 40### Speculative 41[Long shots worth exploring] 42 43## Already Tried (Don't Repeat) 44[List from PORTING_STATUS.md and previous research reports] 45 46## Specific Questions to Investigate 47[Targeted questions for the implementation agent] 48 49## Useful Links & References 50[URLs found during research]

Important Rules

  • Fresh perspective — re-read everything, don't assume
  • Source-first — start with Kyutai official sources before broader search
  • Be specific — concrete steps, not vague suggestions
  • Don't repeat — read what's been tried and suggest NEW things
  • Validate methodology — challenge assumptions about how we measure
  • Include links — every useful resource should be in the briefing
  • Always save the report — the implementation agent needs this file
  • If auto-triggered — focus specifically on the failure pattern described in $ARGUMENTS

Связанные навыки

Looking for an alternative to research or another community skill for your workflow? Explore these related open-source skills.

Показать все

openclaw-release-maintainer

Logo of openclaw
openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

widget-generator

Logo of f
f

Создание настраиваемых плагинов виджетов для системы ленты новостей prompts.chat

flags

Logo of vercel
vercel

Фреймворк React

138.4k
0
Браузер

pr-review

Logo of pytorch
pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Разработчик