Killer-Skills

develop-ai-functions-example: a TypeScript library of AI SDK function examples for Mux video integration

v1.0.0
GitHub

About this Skill

develop-ai-functions-example is a TypeScript library for integrating AI SDK functions with Mux video accounts, providing examples for validation and testing. It is ideal for AI agents such as Cursor, Windsurf, and Claude Code that need advanced multi-modal LLM integration and AI SDK function development.

Features

Provides scripts for validating and testing AI SDK functions in the examples/ai-functions/ directory
Organizes examples by AI SDK function in examples/ai-functions/src/
Supports non-streaming text generation with generateText() in generate-text/ directory
Includes example categories for different AI SDK functions, such as streaming and non-streaming text generation

muxinc
Updated: 2/24/2026

Quality Score: 50 (Excellent, top 5%), based on code quality and docs.
Installation

Universal install (auto-detect), for Cursor, Windsurf, and VS Code:

> npx killer-skills add muxinc/ai

Agent Capability Analysis

The develop-ai-functions-example MCP Server by muxinc is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Ideal for AI agents such as Cursor, Windsurf, and Claude Code that need advanced multi-modal LLM integration and AI SDK function development.

Core Value

Empowers agents to connect videos in Mux accounts to multi-modal LLMs using a TypeScript library, enabling validation, testing, and iteration on AI SDK functions across providers, including text generation with `generateText()`.

Capabilities Granted for develop-ai-functions-example MCP Server

Validating AI SDK functions across multiple providers
Testing and iterating on text generation with `generateText()`
Integrating videos with multi-modal LLMs for enhanced content analysis

Prerequisites & Limits

  • Requires Mux account and TypeScript library setup
  • Limited to AI SDK functions and multi-modal LLMs supported by the library
Project files:

  • SKILL.md (7.4 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

AI Functions Examples

The examples/ai-functions/ directory contains scripts for validating, testing, and iterating on AI SDK functions across providers.

Example Categories

Examples are organized by AI SDK function in examples/ai-functions/src/:

| Directory | Purpose |
| --- | --- |
| `generate-text/` | Non-streaming text generation with `generateText()` |
| `stream-text/` | Streaming text generation with `streamText()` |
| `generate-object/` | Structured output generation with `generateObject()` |
| `stream-object/` | Streaming structured output with `streamObject()` |
| `agent/` | ToolLoopAgent examples for agentic workflows |
| `embed/` | Single embedding generation with `embed()` |
| `embed-many/` | Batch embedding generation with `embedMany()` |
| `generate-image/` | Image generation with `generateImage()` |
| `generate-speech/` | Text-to-speech with `generateSpeech()` |
| `transcribe/` | Audio transcription with `transcribe()` |
| `rerank/` | Document reranking with `rerank()` |
| `middleware/` | Custom middleware implementations |
| `registry/` | Provider registry setup and usage |
| `telemetry/` | OpenTelemetry integration |
| `complex/` | Multi-component examples (agents, routers) |
| `lib/` | Shared utilities (not examples) |
| `tools/` | Reusable tool definitions |

File Naming Convention

Examples follow the pattern: {provider}-{feature}.ts

| Pattern | Example | Description |
| --- | --- | --- |
| `{provider}.ts` | `openai.ts` | Basic provider usage |
| `{provider}-{feature}.ts` | `openai-tool-call.ts` | Specific feature |
| `{provider}-{sub-provider}.ts` | `amazon-bedrock-anthropic.ts` | Provider with sub-provider |
| `{provider}-{sub-provider}-{feature}.ts` | `google-vertex-anthropic-cache-control.ts` | Sub-provider with feature |
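The convention is simple enough to check mechanically. As an illustration, a hypothetical helper (not part of the repo) that validates a filename against the pattern:

```typescript
// Hypothetical helper: check that an example filename follows the
// {provider}(-{sub-provider})(-{feature}).ts convention: lowercase
// alphanumeric segments joined by hyphens. Note that the regex cannot tell
// where the provider name ends and the feature begins, since provider names
// themselves may contain hyphens (e.g. amazon-bedrock).
const EXAMPLE_NAME = /^[a-z0-9]+(-[a-z0-9]+)*\.ts$/;

function isValidExampleName(name: string): boolean {
  return EXAMPLE_NAME.test(name);
}

console.log(isValidExampleName("openai.ts")); // basic provider
console.log(isValidExampleName("openai-tool-call.ts")); // provider + feature
console.log(isValidExampleName("google-vertex-anthropic-cache-control.ts"));
console.log(isValidExampleName("OpenAI.ts")); // uppercase: does not match
```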

Example Structure

All examples use the `run()` wrapper from `lib/run.ts`, which:

  • Loads environment variables from .env
  • Provides error handling with detailed API error logging
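The wrapper itself can be small. A hypothetical sketch of the idea (the real `lib/run.ts` also loads `.env` and logs detailed API error information):

```typescript
// Hypothetical sketch of the run() wrapper: execute the example body and
// convert any thrown error into a logged failure with a nonzero exit code.
export async function run(fn: () => Promise<void>): Promise<void> {
  try {
    await fn();
  } catch (error) {
    // The real helper logs structured API error details here.
    console.error("Example failed:", error);
    process.exitCode = 1;
  }
}

// Usage mirrors the templates in this document:
run(async () => {
  console.log("example body runs inside the wrapper");
});
```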

Basic Template

```typescript
import { providerName } from "@ai-sdk/provider-name";
import { generateText } from "ai";

import { run } from "../lib/run";

run(async () => {
  const result = await generateText({
    model: providerName("model-id"),
    prompt: "Your prompt here.",
  });

  console.warn(result.text);
  console.warn("Token usage:", result.usage);
  console.warn("Finish reason:", result.finishReason);
});
```

Streaming Template

```typescript
import { providerName } from "@ai-sdk/provider-name";
import { streamText } from "ai";

import { printFullStream } from "../lib/print-full-stream";
import { run } from "../lib/run";

run(async () => {
  const result = streamText({
    model: providerName("model-id"),
    prompt: "Your prompt here.",
  });

  await printFullStream({ result });
});
```

Tool Calling Template

```typescript
import { providerName } from "@ai-sdk/provider-name";
import { generateText, tool } from "ai";
import { z } from "zod";

import { run } from "../lib/run";

run(async () => {
  const result = await generateText({
    model: providerName("model-id"),
    tools: {
      myTool: tool({
        description: "Tool description",
        inputSchema: z.object({
          param: z.string().describe("Parameter description"),
        }),
        execute: async ({ param }) => {
          return { result: `Processed: ${param}` };
        },
      }),
    },
    prompt: "Use the tool to...",
  });

  console.warn(JSON.stringify(result, null, 2));
});
```

Structured Output Template

```typescript
import { providerName } from "@ai-sdk/provider-name";
import { generateObject } from "ai";
import { z } from "zod";

import { run } from "../lib/run";

run(async () => {
  const result = await generateObject({
    model: providerName("model-id"),
    schema: z.object({
      name: z.string(),
      items: z.array(z.string()),
    }),
    prompt: "Generate a...",
  });

  console.warn(JSON.stringify(result.object, null, 2));
  console.warn("Token usage:", result.usage);
});
```

Running Examples

From the examples/ai-functions directory:

```bash
pnpm tsx src/generate-text/openai.ts
pnpm tsx src/stream-text/openai-tool-call.ts
pnpm tsx src/agent/openai-generate.ts
```

When to Write Examples

Write examples when:

  1. Adding a new provider: Create basic examples for each supported API (generateText, streamText, generateObject, etc.)

  2. Implementing a new feature: Demonstrate the feature with at least one provider example

  3. Reproducing a bug: Create an example that shows the issue for debugging

  4. Adding provider-specific options: Show how to use providerOptions for provider-specific settings

  5. Creating test fixtures: Use examples to generate API response fixtures (see capture-api-response-test-fixture skill)

Utility Helpers

The lib/ directory contains shared utilities:

| File | Purpose |
| --- | --- |
| `run.ts` | Error-handling wrapper with `.env` loading |
| `print.ts` | Clean object printing (removes `undefined` values) |
| `print-full-stream.ts` | Colored streaming output for tool calls, reasoning, text |
| `save-raw-chunks.ts` | Save streaming chunks for test fixtures |
| `present-image.ts` | Display images in terminal |
| `save-audio.ts` | Save audio files to disk |

Using print utilities

```typescript
import { print } from "../lib/print";

// Pretty print objects without undefined values
print("Result:", result);
print("Usage:", result.usage, { depth: 2 });
```
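Stripping `undefined` values can be done with a small recursive walk before printing. A hypothetical sketch of the idea (the real `lib/print.ts` may differ), with `stripUndefined` and the `print` signature being assumptions for illustration:

```typescript
import { inspect } from "node:util";

// Recursively drop object entries whose value is undefined, descending into
// arrays and nested objects; primitives pass through unchanged.
function stripUndefined(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(stripUndefined);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([, v]) => v !== undefined)
        .map(([k, v]) => [k, stripUndefined(v)]),
    );
  }
  return value;
}

export function print(
  label: string,
  value: unknown,
  options: { depth?: number } = {},
): void {
  console.log(label, inspect(stripUndefined(value), { depth: options.depth ?? null }));
}

print("Usage:", { inputTokens: 12, cachedTokens: undefined });
// prints: Usage: { inputTokens: 12 }
```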

Using printFullStream

```typescript
import { printFullStream } from "../lib/print-full-stream";

const result = streamText({ /* ... */ });
await printFullStream({ result }); // Colored output for text, tool calls, reasoning
```
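Conceptually, the helper walks the stream of parts emitted by `streamText()` and renders each part by type. A hypothetical sketch under simplifying assumptions: the real `lib/print-full-stream.ts` takes the whole `streamText()` result object, colors its output, and handles more part types (reasoning, errors), and the part shapes below are illustrative rather than the AI SDK's exact types:

```typescript
// Two illustrative stream part shapes (the real AI SDK stream has more).
type StreamPart =
  | { type: "text-delta"; text: string }
  | { type: "tool-call"; toolName: string };

// Render one part as text: deltas pass through, tool calls get a tag.
function formatPart(part: StreamPart): string {
  switch (part.type) {
    case "text-delta":
      return part.text;
    case "tool-call":
      return `\n[tool-call] ${part.toolName}\n`;
  }
}

export async function printFullStream(parts: AsyncIterable<StreamPart>): Promise<void> {
  for await (const part of parts) {
    process.stdout.write(formatPart(part));
  }
}

// Demo with a stubbed stream in place of a real streamText() result:
async function* demo(): AsyncIterable<StreamPart> {
  yield { type: "text-delta", text: "Hello" };
  yield { type: "tool-call", toolName: "weather" };
}
void printFullStream(demo());
```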

Reusable Tools

The tools/ directory contains reusable tool definitions:

```typescript
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

import { weatherTool } from "../tools/weather-tool";

const result = await generateText({
  model: openai("gpt-4o"),
  tools: { weather: weatherTool },
  prompt: "What is the weather in San Francisco?",
});
```

Best Practices

  1. Keep examples focused: Each example should demonstrate one feature or use case

  2. Use descriptive prompts: Make it clear what the example is testing

  3. Handle errors gracefully: The run() wrapper handles this automatically

  4. Use realistic model IDs: Use actual model IDs that work with the provider

  5. Add comments for complex logic: Explain non-obvious code patterns

  6. Reuse tools when appropriate: Use weatherTool or create new reusable tools in tools/
