KS
Killer-Skills

openai-patterns

v1.0.0
GitHub

About this Skill

Perfect for AI agents needing automatic report generation and task automation via the OpenAI API. openai-patterns is a skill that uses the OpenAI API to generate automatic reports, leveraging a dual-model strategy for efficient detection and extraction tasks.

Features

Utilizes the OpenAI API with package version ^4.0.0
Employs a client factory from backend/src/services/openai/client-factory.ts
Supports configuration via OPENAI_API_KEY in .env or OPENAI_BASE_URL for local LLM
Deploys a dual-model strategy for detection and extraction tasks
Uses the gpt-4o-mini model for classification and low-cost detection
Extracts atomic facts with efficient processing

By 77DidO
Updated: 2/26/2026
Installation
Universal Install (Auto-Detect)
> npx killer-skills add 77DidO/CRAutomatique2/openai-patterns

Agent Capability Analysis

The openai-patterns MCP Server by 77DidO is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for AI Agents needing automatic report generation and task automation via the OpenAI API.

Core Value

Empowers agents to automate detection and extraction tasks using the OpenAI API, specifically with models like `gpt-4o-mini`, and to leverage a client factory for seamless integration, all configurable via `.env` files. Note that the skill uses the official `openai` package, not the Anthropic SDK or `@azure/openai`.

Capabilities Granted for openai-patterns MCP Server

Automating detection of meeting types
Extracting atomic facts using dual-model strategies
Generating automatic reports with comprehensive content analysis

Prerequisites & Limits

  • Requires OpenAI API Key stored in `.env`
  • Dependent on `openai` package version ^4.0.0
  • Limited to 3000 characters max for simple classification tasks

SKILL.md

OpenAI Patterns — CRAutomatique2

Package and client

  • Package: openai ^4.0.0 (NOT the Anthropic SDK, NOT @azure/openai)
  • Client factory: backend/src/services/openai/client-factory.ts
  • Configuration: OPENAI_API_KEY in .env (or a local vLLM endpoint via OPENAI_BASE_URL)
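As a rough illustration of the configuration rules above, a client factory might resolve its settings from the environment like this. This is a hedged sketch: the real client-factory.ts is not shown in this skill, and the name `resolveOpenAIConfig` is purely illustrative.

```typescript
// Illustrative sketch only — the actual backend/src/services/openai/client-factory.ts
// implementation is not reproduced here.

interface OpenAIClientConfig {
  apiKey: string;
  baseURL?: string; // set when targeting a local OpenAI-compatible server (e.g. vLLM)
}

function resolveOpenAIConfig(env: Record<string, string | undefined>): OpenAIClientConfig {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error('OPENAI_API_KEY is missing from the environment (.env)');
  }
  // OPENAI_BASE_URL is optional: when present, requests go to the local
  // endpoint instead of api.openai.com.
  return env.OPENAI_BASE_URL ? { apiKey, baseURL: env.OPENAI_BASE_URL } : { apiKey };
}
```

The resulting object would typically be passed to `new OpenAI(...)` from the openai ^4.0.0 package.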

Dual-model strategy

| Usage | Model | Why |
| --- | --- | --- |
| Meeting-type detection | gpt-4o-mini | Simple classification, low cost, 3000-char excerpt max |
| Atomic-fact extraction | gpt-4o-mini | Structured JSON, temperature 0 |
| Full brief generation | gpt-4o | Editorial quality, structured output |

Structured output

```typescript
// Detection (simple JSON)
const detection = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages: [{ role: 'user', content: prompt }],
  response_format: { type: 'json_object' },
  temperature: 0,
});

// Brief generation (Zod schema)
const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: systemPrompt },
    { role: 'user', content: userPrompt },
  ],
  response_format: { type: 'json_object' },
  temperature: 0.1,
});
const content = response.choices[0].message.content ?? '';
const brief = BriefStructuredSchema.parse(JSON.parse(content));
```

Zod schemas are defined in backend/src/types/brief.ts:

  • BriefStructuredSchema — main schema
  • BriefMetadataSchema — metadata (meetingType, date, duration)
  • ComexContentSchema, CodirContentSchema, EncoursContentSchema — content per meeting type

Anti-hallucination (3 layers)

Layer 1 — Deterministic pre-extraction (BEFORE the LLM)

  • File: backend/src/services/pre-extraction.ts
  • Regex scan: financial amounts (with raw/x1000/x1M interpretations), dates, people, entities
  • Output: pre-extracted-facts.json in each job
  • Injection: buildFactsConstraintSection() produces a constraint block injected into the prompt
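The deterministic regex scan could be sketched as follows. This is an assumption-laden illustration: the real pre-extraction.ts is not shown here, and both the `scanAmounts` name and the currency pattern are invented for demonstration.

```typescript
// Sketch of a regex pass over the transcript, keeping every plausible
// scale interpretation (raw / x1000 / x1M) for later disambiguation.
// Names and patterns are illustrative, not the project's actual code.

interface ExtractedAmount {
  raw: string;
  interpretations: number[]; // raw, x1000, x1M readings of the same figure
}

function scanAmounts(transcript: string): ExtractedAmount[] {
  // Matches figures like "12 k€" or "3,5 M€" (French decimal comma supported).
  const pattern = /(\d+(?:[.,]\d+)?)\s*(?:k€|M€|€|keuros|millions?)/gi;
  const results: ExtractedAmount[] = [];
  for (const match of transcript.matchAll(pattern)) {
    const value = parseFloat(match[1].replace(',', '.'));
    results.push({
      raw: match[0],
      interpretations: [value, value * 1_000, value * 1_000_000],
    });
  }
  return results;
}
```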

Layer 2 — Deterministic post-processing (AFTER the LLM)

  • ~70 sanitizers/enrichers/guards in applyComexPostProcessings()
  • They correct LLM errors: budget kind, revenue, attributions, wording
  • See the /comex-rules skill for the full chain
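To make the shape of these guards concrete, here is a minimal sketch of one such sanitizer, in the spirit of the pipeline in applyComexPostProcessings(). The `FinancialItem` shape and the `classify`-by-label heuristic are assumptions for illustration, not the project's actual types.

```typescript
// One hypothetical guard: never trust the LLM on structural fields,
// re-derive `kind` deterministically from the label.

interface FinancialItem {
  label: string;
  kind: 'budget' | 'revenue' | 'other';
}

function fixKind(item: FinancialItem): FinancialItem {
  const label = item.label.toLowerCase();
  if (/budget|enveloppe|cost/.test(label)) return { ...item, kind: 'budget' };
  if (/revenue|chiffre d'affaires|sales/.test(label)) return { ...item, kind: 'revenue' };
  return item;
}

function applyGuards(items: FinancialItem[]): FinancialItem[] {
  // Each guard is a pure item -> item function, chained over the brief.
  return items.map(fixKind);
}
```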

Layer 3 — Post-extraction validation

  • File: backend/src/services/openai/validation.ts
  • Checks the brief's financial amounts against the transcript
  • Validates owner names against voice profiles
  • Output: validation-report.json in each job
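The core of the third layer can be sketched as a simple cross-check of the brief's amounts against the transcript. The report shape below only mirrors the idea of validation-report.json; field names are assumptions.

```typescript
// Hedged sketch: flag any amount quoted in the brief that has no literal
// match in the source transcript. The real validation.ts is not shown here.

interface ValidationReport {
  checked: number;
  missing: string[]; // amounts in the brief with no match in the transcript
}

function validateAmounts(briefAmounts: string[], transcript: string): ValidationReport {
  const missing = briefAmounts.filter((amount) => !transcript.includes(amount));
  return { checked: briefAmounts.length, missing };
}
```

A real implementation would also normalize formatting (spaces, decimal separators) before comparing.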

Prompts

Each meeting type has its own dedicated prompt:

  • prompts/detect-meeting-type.ts — detection (3000-char excerpt)
  • prompts/generate-comex.ts — COMEX brief generation
  • prompts/generate-codir.ts — CODIR brief generation
  • prompts/generate-chantier.ts — ENCOURS brief generation
  • prompts/extract-chantier-facts.ts — ENCOURS fact extraction

The prompts receive the pre-extracted constraints via buildFactsConstraintSection().
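A constraint block in that style might look like the sketch below. The real buildFactsConstraintSection() formatting is project-specific and not shown in this skill; the input shape and wording here are assumptions.

```typescript
// Illustrative sketch of a facts-constraint block appended to the prompt,
// so the model is pinned to the amounts and names found deterministically.

function buildFactsConstraintSection(facts: { amounts: string[]; people: string[] }): string {
  const lines = [
    'FACTS (pre-extracted, deterministic; do not contradict):',
    ...facts.amounts.map((a) => `- amount: ${a}`),
    ...facts.people.map((p) => `- person: ${p}`),
  ];
  return lines.join('\n');
}
```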

Rules

  • Never trust the LLM on structural fields (budget.kind, attributions)
  • Temperature 0 for classification/extraction, 0.1 for editorial generation
  • No automatic retries on the app side — the pipeline handles errors via the safe() wrapper
  • Always validate JSON output against the Zod schema before use
  • Truncate the transcript to the model's context size (32768 tokens for local vLLM)
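The truncation rule above can be approximated with a character budget. This is a rough sketch: the 4-characters-per-token ratio is a common rule of thumb, not the project's actual tokenizer, and a real implementation would count tokens properly.

```typescript
// Approximate truncation of a transcript to a model context budget.
// charsPerToken = 4 is a heuristic assumption, not an exact tokenizer.

function truncateToContext(transcript: string, maxTokens: number, charsPerToken = 4): string {
  const maxChars = maxTokens * charsPerToken;
  return transcript.length <= maxChars ? transcript : transcript.slice(0, maxChars);
}

// e.g. truncateToContext(transcript, 32768) for the local vLLM context window.
```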

Key files

  • backend/src/services/openai/client-factory.ts — OpenAI client creation
  • backend/src/services/openai/structured-generation.ts — generation orchestration + post-processing
  • backend/src/services/openai/prompts/ — all prompts by type
  • backend/src/services/openai/validation.ts — anti-hallucination validation
  • backend/src/services/pre-extraction.ts — deterministic pre-extraction
  • backend/src/types/brief.ts — Zod schemas (BriefStructured, ComexContent, etc.)

Related Skills

Looking for an alternative to openai-patterns or building a community AI agent? Explore these related open-source MCP Servers.

  • widget-generator (f) — an open-source AI agent skill for creating widget plugins injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.
  • chat-sdk (lobehub) — a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.
  • zustand (lobehub)
  • data-fetching (lobehub)