Killer-Skills

opengradient — a Python SDK skill for verifiable AI inference with OpenGradient

v1.0.0
About this Skill

opengradient is a Python SDK for verifiable AI inference on OpenGradient. It offers a simple approach to building AI-powered applications and is well suited to AI agents that need verifiable inference and comprehensive content analysis.

Features

Generates working code using the OpenGradient Python SDK
Follows patterns from the examples/ folder (runnable scripts for every feature)
Uses the step-by-step walkthroughs in the tutorials/ folder
Prefers the simplest approach that satisfies the requirements
Reads key reference files when more detail is needed

# Core Topics

OpenGradient
Updated: 3/2/2026

Quality Score

65 — Excellent (Top 5%), based on code quality & docs
Installation
Universal install (auto-detects Cursor, Windsurf, and VS Code):
> npx killer-skills add OpenGradient/OpenGradient-SDK/opengradient

Agent Capability Analysis

The opengradient MCP server by OpenGradient is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Well suited to AI agents that need verifiable AI inference and comprehensive content analysis with OpenGradient.

Core Value

Empowers agents to write correct, idiomatic code with the OpenGradient Python SDK, providing verifiable AI inference and comprehensive content analysis, with access to the bundled tutorials and examples.

Capabilities Granted for opengradient MCP Server

Building verifiable AI models
Generating idiomatic code with OpenGradient Python SDK
Analyzing comprehensive content with OpenGradient

Prerequisites & Limits

  • Requires an OpenGradient account
  • Requires Python 3
  • Needs access to the OpenGradient tutorials and examples
Project files

  • SKILL.md (7.0 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

# SKILL.md

You are an expert on the OpenGradient Python SDK (opengradient). Help the user write correct, idiomatic code using the SDK.

When the user describes what they want to build, generate working code that follows the patterns below. Always prefer the simplest approach that satisfies the requirements.

Key Reference Files

When you need more detail, read these files from the project:

  • Examples: examples/ folder (runnable scripts for every feature)
  • Tutorials: tutorials/ folder (step-by-step walkthroughs)
  • Types & Enums: src/opengradient/types.py
  • Client API: src/opengradient/client/client.py
  • LLM API: src/opengradient/client/llm.py
  • Alpha API: src/opengradient/client/alpha.py
  • LangChain adapter: src/opengradient/agents/__init__.py

Also read the detailed API reference bundled with this skill at api-reference.md in this skill's directory.

SDK Overview

OpenGradient is a decentralized AI inference platform. The SDK provides:

  • Verified LLM inference via TEE (Trusted Execution Environment)
  • x402 payment settlement on Base Sepolia (on-chain receipts)
  • Multi-provider models (OpenAI, Anthropic, Google, xAI) through a unified API
  • On-chain ONNX model inference (alpha features)
  • LangChain integration for building agents
  • Digital twins chat

Initialization

```python
import opengradient as og

client = og.init(
    private_key="0x...",        # Required: Base Sepolia key with OPG tokens
    alpha_private_key="0x...",  # Optional: OpenGradient testnet key
    email="...",                # Optional: Model Hub auth
    password="...",             # Optional: Model Hub auth
    twins_api_key="...",        # Optional: Digital twins
)
```

Before the first LLM call, approve OPG token spending (idempotent):

```python
client.llm.ensure_opg_approval(opg_amount=5)
```

Available Models (og.TEE_LLM)

| Provider  | Models |
|-----------|--------|
| OpenAI    | GPT_4_1_2025_04_14, O4_MINI, GPT_5, GPT_5_MINI, GPT_5_2 |
| Anthropic | CLAUDE_SONNET_4_5, CLAUDE_SONNET_4_6, CLAUDE_HAIKU_4_5, CLAUDE_OPUS_4_5, CLAUDE_OPUS_4_6 |
| Google    | GEMINI_2_5_FLASH, GEMINI_2_5_PRO, GEMINI_2_5_FLASH_LITE, GEMINI_3_PRO, GEMINI_3_FLASH |
| xAI       | GROK_4, GROK_4_FAST, GROK_4_1_FAST, GROK_4_1_FAST_NON_REASONING |

Settlement Modes (og.x402SettlementMode)

  • SETTLE — Hashes only (maximum privacy)
  • SETTLE_METADATA — Full data on-chain (maximum transparency)
  • SETTLE_BATCH — Aggregated hashes (most cost-efficient, default)

Core Patterns

Basic Chat

```python
result = client.llm.chat(
    model=og.TEE_LLM.GEMINI_2_5_FLASH,
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=300,
    temperature=0.0,
)
print(result.chat_output["content"])
```

Streaming

```python
stream = client.llm.chat(
    model=og.TEE_LLM.GPT_4_1_2025_04_14,
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    max_tokens=500,
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```

Tool Calling

```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

result = client.llm.chat(
    model=og.TEE_LLM.GPT_5,
    messages=[{"role": "user", "content": "Weather in NYC?"}],
    tools=tools,
    max_tokens=200,
)

if result.finish_reason == "tool_calls":
    for tc in result.chat_output["tool_calls"]:
        print(f"Call: {tc['function']['name']}({tc['function']['arguments']})")
```
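
A tool call's `arguments` field arrives as a JSON-encoded string, so it usually needs decoding before use. Here is a minimal sketch of a parser; the dict shape mirrors the example above, and handling both string and dict arguments is a defensive assumption, not a documented SDK behavior:

```python
import json

def parse_tool_call(tc: dict) -> tuple[str, dict]:
    """Extract the function name and decoded arguments from one tool call."""
    fn = tc["function"]
    args = fn["arguments"]
    # Arguments are typically a JSON string; accept a pre-decoded dict too.
    if isinstance(args, str):
        args = json.loads(args)
    return fn["name"], args

name, args = parse_tool_call({
    "id": "call_1",
    "function": {"name": "get_weather", "arguments": '{"city": "NYC"}'},
})
```

The decoded `args` dict can then be passed straight to whatever function implements the tool.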

Multi-Turn Tool Agent Loop

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": user_query},
]

for _ in range(max_iterations):
    result = client.llm.chat(
        model=og.TEE_LLM.GPT_5,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    if result.finish_reason == "tool_calls":
        messages.append(result.chat_output)
        for tc in result.chat_output["tool_calls"]:
            tool_result = execute_tool(tc["function"]["name"], tc["function"]["arguments"])
            messages.append({
                "role": "tool",
                "tool_call_id": tc["id"],
                "content": tool_result,
            })
    else:
        final_answer = result.chat_output["content"]
        break
```
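
The loop above calls an `execute_tool` helper that the snippet leaves undefined. One hypothetical way to implement it is a name-to-handler dispatch table; the tool names and return values here are illustrative, not part of the SDK:

```python
import json

# Illustrative handlers; get_weather matches the tool schema defined earlier.
TOOL_HANDLERS = {
    "get_weather": lambda args: f"Sunny in {args['city']}",
}

def execute_tool(name: str, raw_arguments: str) -> str:
    """Dispatch a tool call by name; arguments arrive as a JSON string."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return f"Unknown tool: {name}"
    return handler(json.loads(raw_arguments))

print(execute_tool("get_weather", '{"city": "NYC"}'))  # Sunny in NYC
```

Returning a string for unknown tools (rather than raising) lets the model see the error and recover on the next turn.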

LangChain ReAct Agent

```python
from langchain_core.tools import tool
from langgraph.prebuilt import create_react_agent

llm = og.agents.langchain_adapter(
    private_key="0x...",
    model_cid=og.TEE_LLM.GPT_4_1_2025_04_14,
    max_tokens=300,
)

@tool
def lookup(query: str) -> str:
    """Look up information."""
    return "result"

agent = create_react_agent(llm, [lookup])
result = agent.invoke({"messages": [("user", "Find info about X")]})
print(result["messages"][-1].content)
```

On-Chain ONNX Inference (Alpha)

```python
result = client.alpha.infer(
    model_cid="QmbUqS93oc4JTLMHwpVxsE39mhNxy6hpf6Py3r9oANr8aZ",
    inference_mode=og.InferenceMode.VANILLA,
    model_input={"input": [1.0, 2.0, 3.0]},
)
print(result.model_output)
print(result.transaction_hash)
```

Digital Twins

```python
client = og.init(private_key="0x...", twins_api_key="your-key")

result = client.twins.chat(
    twin_id="0x1abd463fd6244be4a1dc0f69e0b70cd5",
    model=og.TEE_LLM.GROK_4_1_FAST_NON_REASONING,
    messages=[{"role": "user", "content": "What do you think about AI?"}],
    max_tokens=1000,
)
print(result.chat_output["content"])
```

Model Hub: Upload a Model

```python
repo = client.model_hub.create_model(
    model_name="my-model",
    model_desc="A prediction model",
    version="1.0.0",
)
upload = client.model_hub.upload(
    model_name=repo.name,
    version=repo.initialVersion,
    model_path="./model.onnx",
)
print(f"Model CID: {upload.modelCid}")
```

Return Types

  • TextGenerationOutput: chat_output (dict), finish_reason, transaction_hash, payment_hash
  • TextGenerationStream: iterable of StreamChunk objects
  • StreamChunk: choices[0].delta.content, choices[0].delta.tool_calls, usage (final only), is_final
  • InferenceResult: model_output (dict of np.ndarray), transaction_hash
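
The fields above are enough to write a small result handler. As a sketch, with plain dicts standing in for `TextGenerationOutput` (the shapes follow the examples in this skill):

```python
def summarize_output(chat_output: dict, finish_reason: str) -> str:
    """Return the text response, or a summary of requested tool calls."""
    if finish_reason == "tool_calls":
        names = [tc["function"]["name"] for tc in chat_output["tool_calls"]]
        return "tool calls requested: " + ", ".join(names)
    # "stop" and "length" both mean a plain text response.
    return chat_output["content"]
```

This mirrors the branching every example above performs on `finish_reason`.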

Guidelines

  1. Always call client.llm.ensure_opg_approval() before the first LLM inference.
  2. Handle finish_reason: "stop" / "length" = text response, "tool_calls" = function calls.
  3. For streaming, check chunk.choices[0].delta.content is not None before printing.
  4. In tool-calling loops, append result.chat_output as the assistant message, then append each tool result with role: "tool" and matching tool_call_id.
  5. Use environment variables or config files for private keys — never hardcode them.
  6. If you are unsure about a specific API detail, read the source files listed above.
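
Guideline 5 in practice: load keys from the environment at startup. A minimal sketch, where the variable names `OG_PRIVATE_KEY` and `OG_TWINS_API_KEY` are conventions chosen here, not SDK requirements:

```python
import os

def load_keys() -> dict:
    """Read credentials from the environment instead of hardcoding them."""
    key = os.environ.get("OG_PRIVATE_KEY")
    if not key:
        raise RuntimeError("Set OG_PRIVATE_KEY before initializing the client")
    return {
        "private_key": key,
        "twins_api_key": os.environ.get("OG_TWINS_API_KEY"),  # optional
    }

# Usage: client = og.init(**load_keys())
```

Failing fast with a clear message beats passing an empty key into `og.init()`.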

Related Skills

Looking for an alternative to opengradient or building a community AI agent? Explore these related open-source MCP servers.

  • widget-generator — an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling, and custom render widgets built as full React components.
  • chat-sdk (lobehub) — a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.
  • zustand (lobehub)
  • data-fetching (lobehub)