td-svm — for Claude Code

v1.0.0

About this Skill

Ideal for Data Science Agents requiring advanced linear and non-linear classification capabilities using Support Vector Machine (SVM).

Features

Teradata Support Vector Machine
| Skill Name | Teradata Support Vector Machine |
| ---------------- | -------------- |
| Description | Support Vector Machine for linear and non-linear classification |
| Category | Classification Analytics |

# Core Topics

teradata-labs
Updated: 3/12/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reviewed. Landing page review score: 10/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned
Review Score: 10/11
Quality Score: 54
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to run comprehensive classification workflows with Teradata Support Vector Machine: automated preprocessing (scaling, encoding, and train-test splitting) followed by TD_SVM training, evaluation, and model deployment.

Ideal Agent Persona

Ideal for Data Science Agents requiring advanced linear and non-linear classification capabilities using Support Vector Machine

Capabilities Granted for td-svm

Automating classification model development with TD_SVM
Generating insights from large datasets using Teradata
Deploying predictive models with advanced preprocessing

Prerequisites & Limits

  • Requires Teradata environment
  • Specific to classification analytics

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide the next action before you keep reading repository material.

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is td-svm?

td-svm provides Teradata Support Vector Machine (TD_SVM) capabilities for linear and non-linear classification. It is aimed at Data Science Agents that need advanced classification workflows.

How do I install td-svm?

Run the command: npx killer-skills add teradata-labs/claude-cookbooks/td-svm. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for td-svm?

Key use cases include: Automating classification model development with TD_SVM, Generating insights from large datasets using Teradata, Deploying predictive models with advanced preprocessing.

Which IDEs are compatible with td-svm?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for td-svm?

Requires Teradata environment. Specific to classification analytics.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add teradata-labs/claude-cookbooks/td-svm. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use td-svm immediately in the current project.

Upstream Repository Material


Upstream Source

td-svm

Install td-svm, an AI agent skill for classification analytics workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md

Teradata Support Vector Machine

| Skill Name | Teradata Support Vector Machine |
| ---------------- | -------------- |
| Description | Support Vector Machine for linear and non-linear classification |
| Category | Classification Analytics |
| Function | TD_SVM |

Core Capabilities

  • Complete analytical workflow from data exploration to model deployment
  • Automated preprocessing including scaling, encoding, and train-test splitting
  • Advanced TD_SVM implementation with parameter optimization
  • Comprehensive evaluation metrics and model validation
  • Production-ready SQL generation with proper table management
  • Error handling and data quality checks throughout the pipeline
  • Business-focused interpretation of analytical results

Table Analysis Workflow

This skill automatically analyzes your provided table to generate optimized SQL workflows. Here's how it works:

1. Table Structure Analysis

  • Column Detection: Automatically identifies all columns and their data types
  • Data Type Classification: Distinguishes between numeric, categorical, and text columns
  • Primary Key Identification: Detects unique identifier columns
  • Missing Value Assessment: Analyzes data completeness
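The structure analysis above can be sketched directly against Teradata's data dictionary. The table and column names below (`my_db.my_data`, `feature_col`) are placeholders, and `DBC.ColumnsV` is the standard dictionary view; verify the available columns against your Vantage release.

```sql
-- Sketch: list columns and types for a hypothetical table my_db.my_data
-- via the DBC.ColumnsV dictionary view.
SELECT ColumnName, ColumnType, Nullable
FROM DBC.ColumnsV
WHERE DatabaseName = 'my_db'
  AND TableName = 'my_data'
ORDER BY ColumnId;

-- Missing-value assessment for one candidate feature column:
-- COUNT(col) skips NULLs, so the difference is the missing count.
SELECT COUNT(*)                      AS total_rows,
       COUNT(feature_col)            AS non_null_rows,
       COUNT(*) - COUNT(feature_col) AS missing_rows
FROM my_db.my_data;
```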

2. Feature Engineering Recommendations

  • Numeric Features: Identifies columns suitable for scaling and normalization
  • Categorical Features: Detects columns requiring encoding (one-hot, label encoding)
  • Target Variable: Helps identify the dependent variable for modeling
  • Feature Selection: Recommends relevant features based on data types
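As a minimal illustration of the encoding step, a categorical column can be one-hot encoded with plain CASE expressions. The `region` column and its levels below are hypothetical, and ClearScape Analytics also ships dedicated fit/transform functions for encoding that may be preferable in production.

```sql
-- Sketch: manual one-hot encoding of a hypothetical categorical column
-- 'region' with two known levels; all names are placeholders.
SELECT id_col,
       CASE WHEN region = 'north' THEN 1 ELSE 0 END AS region_north,
       CASE WHEN region = 'south' THEN 1 ELSE 0 END AS region_south,
       numeric_feature
FROM my_db.my_data;
```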

3. SQL Generation Process

  • Dynamic Column Lists: Generates column lists based on your table structure
  • Parameterized Queries: Creates flexible SQL templates using your table schema
  • Table Name Integration: Replaces placeholders with your actual table names
  • Database Context: Adapts to your database and schema naming conventions
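A dynamic column list of the kind described above can be assembled from the dictionary itself. The sketch below uses Teradata's XMLAGG string aggregation; all names are placeholders, and the cast syntax should be checked against your release.

```sql
-- Sketch: build a comma-separated feature list from DBC.ColumnsV,
-- excluding the ID column, ready to paste into a generated query.
SELECT TRIM(TRAILING ',' FROM
         (XMLAGG(TRIM(ColumnName) || ',' ORDER BY ColumnId) (VARCHAR(10000)))
       ) AS column_list
FROM DBC.ColumnsV
WHERE DatabaseName = 'my_db'
  AND TableName = 'my_data'
  AND ColumnName <> 'id_col';
```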

How to Use This Skill

  1. Provide Your Table Information:

    "Analyze table: database_name.table_name"
    or
    "Use table: my_data with target column: target_var"
    
  2. The Skill Will:

    • Query your table structure (for example via HELP COLUMN table_name.* or the DBC.ColumnsV dictionary view)
    • Analyze data types and suggest appropriate preprocessing
    • Generate complete SQL workflow with your specific column names
    • Provide optimized parameters based on your data characteristics

Input Requirements

Data Requirements

  • Source table: Teradata table with analytical data
  • Target column: Dependent variable for classification analysis
  • Feature columns: Independent variables (numeric and categorical)
  • ID column: Unique identifier for record tracking
  • Minimum sample size: 100+ records for reliable classification modeling
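These requirements can be checked up front with a single query; the table and column names below are placeholders.

```sql
-- Sketch: verify the 100-record minimum and ID uniqueness before modeling.
SELECT COUNT(*)               AS n_rows,
       COUNT(DISTINCT id_col) AS n_ids,
       CASE WHEN COUNT(*) >= 100 AND COUNT(*) = COUNT(DISTINCT id_col)
            THEN 'ok' ELSE 'check input' END AS precheck
FROM my_db.my_data;
```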

Technical Requirements

  • Teradata Vantage with ClearScape Analytics enabled
  • Database permissions: CREATE, DROP, SELECT on working database
  • Function access: TD_SVM, SVMSparsePredict

Output Formats

Generated Tables

  • Preprocessed data tables with proper scaling and encoding
  • Train/test split tables for model validation
  • Model table containing trained TD_SVM parameters
  • Prediction results with confidence metrics
  • Evaluation metrics table with performance statistics
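A train/test split of the kind listed above can be sketched with Teradata's SAMPLE clause. Table and column names are hypothetical, and ClearScape's TD_TrainTestSplit function is an alternative worth checking for stratified splits.

```sql
-- Sketch: 80/20 split. Rows not sampled into the training table become
-- the test set via an anti-join on the ID column.
CREATE TABLE my_db.my_data_train AS (
  SELECT * FROM my_db.my_data SAMPLE 0.80
) WITH DATA;

CREATE TABLE my_db.my_data_test AS (
  SELECT d.*
  FROM my_db.my_data d
  LEFT JOIN my_db.my_data_train t ON d.id_col = t.id_col
  WHERE t.id_col IS NULL
) WITH DATA;
```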

SQL Scripts

  • Complete workflow scripts ready for execution
  • Parameterized queries for different datasets
  • Table management with proper cleanup procedures

Classification Use Cases Supported

Each of the following is supported by the same comprehensive end-to-end analysis workflow:

  1. Non-linear classification
  2. High-dimensional data
  3. Kernel methods

Best Practices Applied

  • Data validation before analysis execution
  • Proper feature scaling and categorical encoding
  • Train-test splitting with stratification when appropriate
  • Cross-validation for robust model evaluation
  • Parameter optimization using systematic approaches
  • Residual analysis and diagnostic checks
  • Business interpretation of statistical results
  • Documentation of methodology and assumptions
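The feature-scaling practice above can be sketched with plain aggregates; ClearScape's TD_ScaleFit/TD_ScaleTransform pair is the production route, and every name below is a placeholder.

```sql
-- Sketch: z-score scaling of one numeric feature via a scalar subquery.
-- NULLIFZERO guards against a zero standard deviation (constant column).
SELECT d.id_col,
       (d.feature_col - s.mu) / NULLIFZERO(s.sigma) AS feature_scaled
FROM my_db.my_data d
CROSS JOIN (
  SELECT AVG(feature_col)         AS mu,
         STDDEV_SAMP(feature_col) AS sigma
  FROM my_db.my_data
) s;
```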

Example Usage

```sql
-- Example workflow for Teradata Support Vector Machine
-- Replace 'your_table' with actual table name

-- 1. Data exploration and validation
SELECT COUNT(*),
       COUNT(DISTINCT your_id_column),
       AVG(your_target_column),
       STDDEV(your_target_column)
FROM your_database.your_table;

-- 2. Execute complete classification workflow
-- (Detailed SQL provided by the skill)
```

Scripts Included

Core Analytics Scripts

  • preprocessing.sql: Data preparation and feature engineering
  • table_analysis.sql: Automatic table structure analysis
  • complete_workflow_template.sql: End-to-end workflow template
  • model_training.sql: TD_SVM training procedures
  • prediction.sql: SVMSparsePredict execution
  • evaluation.sql: Model validation and metrics calculation
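For orientation, a TD_SVM training call generally follows ClearScape Analytics' ON ... USING table-operator syntax, as sketched below. The argument names (InputColumns, ResponseColumn, ModelType) and their defaults vary by Vantage release, so treat this as an assumption to verify against the TD_SVM reference; all table and column names are placeholders.

```sql
-- Sketch only: verify argument names against your TD_SVM documentation
-- before running; my_db.* names and feature columns are hypothetical.
SELECT * FROM TD_SVM (
  ON my_db.my_data_train AS InputTable
  OUT PERMANENT TABLE OutputTable (my_db.svm_model)
  USING
    InputColumns ('feature_1', 'feature_2')
    ResponseColumn ('target_var')
    ModelType ('Classification')
) AS dt;
```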

Utility Scripts

  • data_quality_checks.sql: Comprehensive data validation
  • parameter_tuning.sql: Systematic parameter optimization
  • diagnostic_queries.sql: Model diagnostics and interpretation

Limitations and Disclaimers

  • Data quality: Results depend on input data quality and completeness
  • Sample size: Minimum sample size requirements for reliable results
  • Feature selection: Manual feature engineering may be required
  • Computational resources: Large datasets may require optimization
  • Business context: Statistical results require domain expertise for interpretation
  • Model assumptions: Understand underlying mathematical assumptions

Quality Checks

Automated Validations

  • Data completeness verification before analysis
  • Statistical assumptions testing where applicable
  • Model convergence monitoring during training
  • Prediction quality assessment using validation data
  • Performance metrics calculation and interpretation
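The completeness verification above can be expressed as a single pass over the required columns; all names below are hypothetical.

```sql
-- Sketch: fail fast if the target or a required feature contains NULLs.
SELECT CASE
         WHEN SUM(CASE WHEN target_var IS NULL THEN 1 ELSE 0 END) = 0
          AND SUM(CASE WHEN feature_1  IS NULL THEN 1 ELSE 0 END) = 0
         THEN 'pass' ELSE 'fail'
       END AS completeness_check
FROM my_db.my_data;
```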

Manual Review Points

  • Feature selection appropriateness for business problem
  • Model interpretation alignment with domain knowledge
  • Results validation against business expectations
  • Documentation completeness for reproducibility

Updates and Maintenance

  • Version compatibility: Tested with latest Teradata Vantage releases
  • Performance optimization: Regular query performance reviews
  • Best practices: Updated based on analytics community feedback
  • Documentation: Maintained with latest ClearScape Analytics features
  • Examples: Updated with real-world use cases and scenarios

This skill provides production-ready classification analytics using Teradata ClearScape Analytics TD_SVM with comprehensive data science best practices.

Related Skills

Looking for an alternative to td-svm or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): AI agent skill for openclaw release maintenance.
  • widget-generator (f): Generate customizable widget plugins for the prompts.chat feed system.
  • flags (vercel): Skill for adding or changing framework feature flags in Next.js internals.
  • pr-review (pytorch): PyTorch PR review skill.