langfuse — community skill (langfuse-mcp)

v1.0.3

About this Skill

Perfect for AI debugging agents needing enhanced observability and tracing capabilities through Langfuse. Debug AI traces, find exceptions, analyze sessions, and manage prompts via the Langfuse MCP server. Use when debugging AI pipelines, investigating errors, analyzing latency, managing prompt versions, or setting up observability.

avivsinai
Updated: 3/12/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Landing Page Review Score: 9/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned

  • Review Score: 9/11
  • Quality Score: 50
  • Canonical Locale: en
  • Detected Body Locale: en


Core Value

Empowers agents to query Langfuse trace data for advanced debugging. It works with API keys against Langfuse Cloud or a self-hosted instance, and integrates with datasets and evaluation sets to improve exception handling and performance analysis.

Ideal Agent Persona

Perfect for AI debugging agents needing enhanced observability and tracing capabilities through Langfuse.

Capabilities Granted for langfuse

Debugging AI system exceptions
Analyzing performance bottlenecks through traces
Setting up Langfuse observability for enhanced system monitoring

Prerequisites & Limits

  • Requires Langfuse API key or self-hosted instance
  • Needs MCP installation for integration

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After the Review

Decide the next action before you keep reading repository material

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is langfuse?

Perfect for AI debugging agents needing enhanced observability and tracing capabilities through Langfuse. Debug AI traces, find exceptions, analyze sessions, and manage prompts via the Langfuse MCP server. Use when debugging AI pipelines, investigating errors, analyzing latency, managing prompt versions, or setting up observability.

How do I install langfuse?

Run the command: npx killer-skills add avivsinai/langfuse-mcp/langfuse. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for langfuse?

Key use cases include: Debugging AI system exceptions, Analyzing performance bottlenecks through traces, Setting up Langfuse observability for enhanced system monitoring.

Which IDEs are compatible with langfuse?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for langfuse?

Requires Langfuse API key or self-hosted instance. Needs MCP installation for integration.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add avivsinai/langfuse-mcp/langfuse. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use langfuse immediately in the current project.

Upstream Repository Material

Upstream Source

langfuse

Install langfuse, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md

Langfuse Skill

Debug your AI systems through Langfuse observability.

Triggers: langfuse, traces, debug AI, find exceptions, set up langfuse, what went wrong, why is it slow, datasets, evaluation sets

Setup

Step 1: Get credentials from https://cloud.langfuse.com → Settings → API Keys

If self-hosted, use your instance URL for LANGFUSE_HOST and create keys there.

Step 2: Install MCP (pick one):

```bash
# Claude Code (project-scoped, shared via .mcp.json)
claude mcp add \
  --scope project \
  --env LANGFUSE_PUBLIC_KEY=pk-... \
  --env LANGFUSE_SECRET_KEY=sk-... \
  --env LANGFUSE_HOST=https://cloud.langfuse.com \
  langfuse -- uvx --python 3.11 langfuse-mcp

# Codex CLI (user-scoped, stored in ~/.codex/config.toml)
codex mcp add langfuse \
  --env LANGFUSE_PUBLIC_KEY=pk-... \
  --env LANGFUSE_SECRET_KEY=sk-... \
  --env LANGFUSE_HOST=https://cloud.langfuse.com \
  -- uvx --python 3.11 langfuse-mcp
```

Step 3: Restart CLI, verify with /mcp (Claude) or codex mcp list (Codex)

Step 4: Test: fetch_traces(age=60)

Read-Only Mode

For safer observability without risk of modifying prompts or datasets, enable read-only mode:

```bash
# CLI flag
langfuse-mcp --read-only

# Or environment variable
LANGFUSE_MCP_READ_ONLY=true
```

This disables write tools: create_text_prompt, create_chat_prompt, update_prompt_labels, create_dataset, create_dataset_item, delete_dataset_item.

For manual .mcp.json setup or troubleshooting, see references/setup.md.
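As a sketch of that manual setup, a project-scoped .mcp.json with read-only mode enabled might look like the following. The server name, key placeholders, and uvx invocation mirror the install command above; the exact schema is an assumption based on the common mcpServers convention, so confirm it against references/setup.md:

```shell
# Hypothetical manual setup: write a project-scoped .mcp.json
# with read-only mode enabled via LANGFUSE_MCP_READ_ONLY.
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "langfuse": {
      "command": "uvx",
      "args": ["--python", "3.11", "langfuse-mcp"],
      "env": {
        "LANGFUSE_PUBLIC_KEY": "pk-...",
        "LANGFUSE_SECRET_KEY": "sk-...",
        "LANGFUSE_HOST": "https://cloud.langfuse.com",
        "LANGFUSE_MCP_READ_ONLY": "true"
      }
    }
  }
}
EOF
```

Committing this file shares the server with everyone on the project, so prefer placeholder keys plus per-developer environment variables over real secrets.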


Playbooks

"Where are the errors?"

find_exceptions(age=1440, group_by="file")

→ Shows error counts by file. Pick the worst offender.

find_exceptions_in_file(filepath="src/ai/chat.py", age=1440)

→ Lists specific exceptions. Grab a trace_id.

get_exception_details(trace_id="...")

→ Full stacktrace and context.


"What happened in this interaction?"

fetch_traces(age=60, user_id="...")

→ Find the trace. Note the trace_id.

If you don't know the user_id, start with fetch_traces(age=60).

fetch_trace(trace_id="...", include_observations=true)

→ See all LLM calls in the trace.

fetch_observation(observation_id="...")

→ Inspect a specific generation's input/output.


"Why is it slow?"

fetch_observations(age=60, type="GENERATION")

→ Find recent LLM calls. Look for high latency.

fetch_observation(observation_id="...")

→ Check token counts, model, timing.


"What's this user experiencing?"

get_user_sessions(user_id="...", age=1440)

→ List their sessions.

get_session_details(session_id="...")

→ See all traces in the session.


"Manage datasets"

list_datasets()

→ See all datasets.

get_dataset(name="evaluation-set-v1")

→ Get dataset details.

list_dataset_items(dataset_name="evaluation-set-v1", page=1, limit=10)

→ Browse items in the dataset.

create_dataset(name="qa-test-cases", description="QA evaluation set")

→ Create a new dataset.

create_dataset_item(
  dataset_name="qa-test-cases",
  input={"question": "What is 2+2?"},
  expected_output={"answer": "4"}
)

→ Add test cases.

create_dataset_item(
  dataset_name="qa-test-cases",
  item_id="item_123",
  input={"question": "What is 3+3?"},
  expected_output={"answer": "6"}
)

→ Upsert: updates existing item by id or creates if missing.


"Manage prompts"

list_prompts()

→ See all prompts with labels.

get_prompt(name="...", label="production")

→ Fetch current production version.

create_text_prompt(name="...", prompt="...", labels=["staging"])

→ Create new version in staging.

update_prompt_labels(name="...", version=N, labels=["production"])

→ Promote to production. (Rollback = re-apply label to older version)
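For example, a rollback is just a relabel, assuming a hypothetical prompt whose version 5 is currently live and whose version 4 is the known-good fallback:

```
# v5 carries the "production" label; move the label back to v4
update_prompt_labels(name="support-reply", version=4, labels=["production"])
```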


Quick Reference

| Task | Tool |
| --- | --- |
| List traces | fetch_traces(age=N) |
| Get trace details | fetch_trace(trace_id="...", include_observations=true) |
| List LLM calls | fetch_observations(age=N, type="GENERATION") |
| Get observation | fetch_observation(observation_id="...") |
| Error count | get_error_count(age=N) |
| Find exceptions | find_exceptions(age=N, group_by="file") |
| List sessions | fetch_sessions(age=N) |
| User sessions | get_user_sessions(user_id="...", age=N) |
| List prompts | list_prompts() |
| Get prompt | get_prompt(name="...", label="production") |
| List datasets | list_datasets() |
| Get dataset | get_dataset(name="...") |
| List dataset items | list_dataset_items(dataset_name="...", limit=N) |
| Create/update dataset item | create_dataset_item(dataset_name="...", item_id="...") |

age = minutes to look back (max 10080 = 7 days)
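Since age is always minutes, a quick shell sketch for converting a day-based lookback and clamping it to the 7-day ceiling (variable names are illustrative):

```shell
# Convert a lookback in days to the 'age' parameter (minutes),
# clamped to the documented maximum of 10080 (7 days).
days=3
age=$(( days * 24 * 60 ))
if [ "$age" -gt 10080 ]; then age=10080; fi
echo "$age"   # 3 days -> 4320
```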


Troubleshooting

MCP connection fails

  • Verify credentials: check LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST
  • Restart CLI after adding/updating MCP config
  • Test MCP independently: fetch_traces(age=60) — if this fails, the issue is MCP, not the skill
  • See references/setup.md for detailed troubleshooting

No traces found

  • Increase the age parameter (default lookback may be too short)
  • Verify your application is sending traces to the correct Langfuse project
  • Check LANGFUSE_HOST points to the right instance (cloud vs self-hosted)

Permission denied

  • Regenerate API keys from Langfuse dashboard
  • Ensure keys have the required scopes for the operation
  • Write operations require read-write keys (not read-only mode)

References

  • references/tool-reference.md — Full parameter docs, filter semantics, response schemas
  • references/setup.md — Manual setup, troubleshooting, advanced configuration
