Killer-Skills

develop-ai-functions-example — Community

v1.0.0

About this Skill

Perfect for AI Agents needing advanced AI-powered application development with TypeScript and Next.js. The AI SDK, from the creators of Next.js, is a free, open-source AI toolkit for TypeScript for building AI-powered applications and agents.

vercel
Updated: 2/20/2026

Quality Score: 55 (Excellent, Top 5%), based on code quality & docs
Installation
Universal install (auto-detects Cursor, Windsurf, and VS Code IDEs):
> npx killer-skills add vercel/ai/develop-ai-functions-example

Agent Capability Analysis

The develop-ai-functions-example MCP Server by vercel is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for AI Agents needing advanced AI-powered application development with TypeScript and Next.js.

Core Value

Empowers agents to build AI-powered applications and agents with the AI SDK, the AI toolkit for TypeScript, including non-streaming text generation with `generateText()` across providers.

Capabilities Granted for develop-ai-functions-example MCP Server

Validating AI SDK functions across providers
Testing and iterating on AI-powered applications
Generating non-streaming text with `generateText()`
Building custom AI functions with the AI Toolkit

! Prerequisites & Limits

  • Requires TypeScript and Next.js setup
  • Limited to AI SDK functions and providers
Project files

  • SKILL.md (7.4 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

AI Functions Examples

The examples/ai-functions/ directory contains scripts for validating, testing, and iterating on AI SDK functions across providers.

Example Categories

Examples are organized by AI SDK function in examples/ai-functions/src/:

| Directory | Purpose |
| --- | --- |
| generate-text/ | Non-streaming text generation with `generateText()` |
| stream-text/ | Streaming text generation with `streamText()` |
| generate-object/ | Structured output generation with `generateObject()` |
| stream-object/ | Streaming structured output with `streamObject()` |
| agent/ | ToolLoopAgent examples for agentic workflows |
| embed/ | Single embedding generation with `embed()` |
| embed-many/ | Batch embedding generation with `embedMany()` |
| generate-image/ | Image generation with `generateImage()` |
| generate-speech/ | Text-to-speech with `generateSpeech()` |
| transcribe/ | Audio transcription with `transcribe()` |
| rerank/ | Document reranking with `rerank()` |
| middleware/ | Custom middleware implementations |
| registry/ | Provider registry setup and usage |
| telemetry/ | OpenTelemetry integration |
| complex/ | Multi-component examples (agents, routers) |
| lib/ | Shared utilities (not examples) |
| tools/ | Reusable tool definitions |

File Naming Convention

Examples follow the pattern `{provider}-{feature}.ts`:

| Pattern | Example | Description |
| --- | --- | --- |
| `{provider}.ts` | `openai.ts` | Basic provider usage |
| `{provider}-{feature}.ts` | `openai-tool-call.ts` | Specific feature |
| `{provider}-{sub-provider}.ts` | `amazon-bedrock-anthropic.ts` | Provider with sub-provider |
| `{provider}-{sub-provider}-{feature}.ts` | `google-vertex-anthropic-cache-control.ts` | Sub-provider with feature |
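As an illustration of the convention (this helper is not part of the repo), an example filename can be split mechanically, provided you supply a list of known provider prefixes, since sub-providers and features are otherwise ambiguous:

```typescript
// Hypothetical helper: splits an example filename into provider and
// feature parts per the naming convention above. The longest matching
// provider prefix wins, which handles multi-word providers such as
// "amazon-bedrock".
function parseExampleName(
  file: string,
  knownProviders: string[],
): { provider: string; feature?: string } {
  const base = file.replace(/\.ts$/, '');
  const provider =
    knownProviders
      .filter((p) => base === p || base.startsWith(p + '-'))
      .sort((a, b) => b.length - a.length)[0] ?? base;
  const rest = base.slice(provider.length + 1);
  return rest ? { provider, feature: rest } : { provider };
}
```

For example, `parseExampleName('openai-tool-call.ts', ['openai'])` yields provider `openai` with feature `tool-call`.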

Example Structure

All examples use the run() wrapper from lib/run.ts which:

  • Loads environment variables from .env
  • Provides error handling with detailed API error logging

Basic Template

```typescript
import { providerName } from '@ai-sdk/provider-name';
import { generateText } from 'ai';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: providerName('model-id'),
    prompt: 'Your prompt here.',
  });

  console.log(result.text);
  console.log('Token usage:', result.usage);
  console.log('Finish reason:', result.finishReason);
});
```

Streaming Template

```typescript
import { providerName } from '@ai-sdk/provider-name';
import { streamText } from 'ai';
import { printFullStream } from '../lib/print-full-stream';
import { run } from '../lib/run';

run(async () => {
  const result = streamText({
    model: providerName('model-id'),
    prompt: 'Your prompt here.',
  });

  await printFullStream({ result });
});
```

Tool Calling Template

```typescript
import { providerName } from '@ai-sdk/provider-name';
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { run } from '../lib/run';

run(async () => {
  const result = await generateText({
    model: providerName('model-id'),
    tools: {
      myTool: tool({
        description: 'Tool description',
        inputSchema: z.object({
          param: z.string().describe('Parameter description'),
        }),
        execute: async ({ param }) => {
          return { result: `Processed: ${param}` };
        },
      }),
    },
    prompt: 'Use the tool to...',
  });

  console.log(JSON.stringify(result, null, 2));
});
```

Structured Output Template

```typescript
import { providerName } from '@ai-sdk/provider-name';
import { generateObject } from 'ai';
import { z } from 'zod';
import { run } from '../lib/run';

run(async () => {
  const result = await generateObject({
    model: providerName('model-id'),
    schema: z.object({
      name: z.string(),
      items: z.array(z.string()),
    }),
    prompt: 'Generate a...',
  });

  console.log(JSON.stringify(result.object, null, 2));
  console.log('Token usage:', result.usage);
});
```

Running Examples

From the examples/ai-functions directory:

```bash
pnpm tsx src/generate-text/openai.ts
pnpm tsx src/stream-text/openai-tool-call.ts
pnpm tsx src/agent/openai-generate.ts
```

When to Write Examples

Write examples when:

  1. Adding a new provider: Create basic examples for each supported API (generateText, streamText, generateObject, etc.)

  2. Implementing a new feature: Demonstrate the feature with at least one provider example

  3. Reproducing a bug: Create an example that shows the issue for debugging

  4. Adding provider-specific options: Show how to use providerOptions for provider-specific settings

  5. Creating test fixtures: Use examples to generate API response fixtures (see capture-api-response-test-fixture skill)

Utility Helpers

The lib/ directory contains shared utilities:

| File | Purpose |
| --- | --- |
| run.ts | Error-handling wrapper with .env loading |
| print.ts | Clean object printing (removes undefined values) |
| print-full-stream.ts | Colored streaming output for tool calls, reasoning, text |
| save-raw-chunks.ts | Save streaming chunks for test fixtures |
| present-image.ts | Display images in terminal |
| save-audio.ts | Save audio files to disk |

Using print utilities

```typescript
import { print } from '../lib/print';

// Pretty print objects without undefined values
print('Result:', result);
print('Usage:', result.usage, { depth: 2 });
```
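For a sense of how the undefined-stripping could work, here is a plausible sketch (assumption: the actual lib/print.ts may be implemented differently):

```typescript
import { inspect } from 'node:util';

// Recursively drop undefined values so printed objects stay clean.
export function stripUndefined(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(stripUndefined);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .filter(([, v]) => v !== undefined)
        .map(([k, v]) => [k, stripUndefined(v)]),
    );
  }
  return value;
}

// Label plus pretty-printed value; depth limits nesting as in util.inspect.
export function print(
  label: string,
  value: unknown,
  opts: { depth?: number } = {},
): void {
  console.log(label, inspect(stripUndefined(value), { depth: opts.depth ?? null }));
}
```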

Using printFullStream

```typescript
import { printFullStream } from '../lib/print-full-stream';

const result = streamText({ ... });
await printFullStream({ result }); // Colored output for text, tool calls, reasoning
```

Reusable Tools

The tools/ directory contains reusable tool definitions:

```typescript
import { weatherTool } from '../tools/weather-tool';

const result = await generateText({
  model: openai('gpt-4o'),
  tools: { weather: weatherTool },
  prompt: 'What is the weather in San Francisco?',
});
```

Best Practices

  1. Keep examples focused: Each example should demonstrate one feature or use case

  2. Use descriptive prompts: Make it clear what the example is testing

  3. Handle errors gracefully: The run() wrapper handles this automatically

  4. Use realistic model IDs: Use actual model IDs that work with the provider

  5. Add comments for complex logic: Explain non-obvious code patterns

  6. Reuse tools when appropriate: Use weatherTool or create new reusable tools in tools/

Related Skills

Looking for an alternative to develop-ai-functions-example, or building a community AI agent? Explore these related open-source MCP Servers.


widget-generator

f

widget-generator is an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

149.6k
0
Design

chat-sdk

lobehub

chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.

73.0k
0
Communication

zustand

lobehub

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

72.8k
0
Communication

data-fetching

lobehub


72.8k
0
Communication