Killer-Skills

td-svm — Teradata Support Vector Machine

v1.0.0
GitHub
About this Skill

td-svm is a collection of notebooks and recipes for effective AI-driven classification analytics with the Teradata Support Vector Machine (TD_SVM). It is ideal for data science agents that need advanced linear and non-linear classification capabilities.

Features

Complete analytical workflow from data exploration to model deployment on Teradata
Automated preprocessing including scaling, encoding, and train-test splitting
Advanced TD_SVM implementation for linear and non-linear classification

Maintainer: teradata-labs
Updated: 3/7/2026

Installation

> npx killer-skills add teradata-labs/claude-cookbooks/td-svm
Agent Capability Analysis

The td-svm MCP Server by teradata-labs is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Ideal for Data Science Agents requiring advanced linear and non-linear classification capabilities using Support Vector Machine

Core Value

Empowers agents to perform comprehensive analytical workflows for classification using Teradata Support Vector Machine, including automated preprocessing and advanced TD_SVM implementation, leveraging train-test splitting and scaling for accurate model deployment

Capabilities Granted for td-svm MCP Server

Automating classification model development with TD_SVM
Generating insights from large datasets using Teradata
Deploying predictive models with advanced preprocessing

! Prerequisites & Limits

  • Requires Teradata environment
  • Specific to classification analytics
Project

  • SKILL.md (6.7 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Teradata Support Vector Machine

Skill Name: Teradata Support Vector Machine
Description: Support Vector Machine for linear and non-linear classification
Category: Classification Analytics
Function: TD_SVM

Core Capabilities

  • Complete analytical workflow from data exploration to model deployment
  • Automated preprocessing including scaling, encoding, and train-test splitting
  • Advanced TD_SVM implementation with parameter optimization
  • Comprehensive evaluation metrics and model validation
  • Production-ready SQL generation with proper table management
  • Error handling and data quality checks throughout the pipeline
  • Business-focused interpretation of analytical results

Table Analysis Workflow

This skill automatically analyzes your provided table to generate optimized SQL workflows. Here's how it works:

1. Table Structure Analysis

  • Column Detection: Automatically identifies all columns and their data types
  • Data Type Classification: Distinguishes between numeric, categorical, and text columns
  • Primary Key Identification: Detects unique identifier columns
  • Missing Value Assessment: Analyzes data completeness
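
As a sketch of the kind of structure query this step might issue, Teradata exposes column metadata through the DBC.ColumnsV data-dictionary view (database and table names below are placeholders):

```sql
-- List column names, types, and nullability for the target table
SELECT ColumnName,
       ColumnType,
       Nullable
FROM DBC.ColumnsV
WHERE DatabaseName = 'your_database'
  AND TableName = 'your_table'
ORDER BY ColumnId;
```

ColumnType holds Teradata's type codes (e.g., 'I' for INTEGER, 'CV' for VARCHAR), which the analysis step can map to numeric vs. categorical roles.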

2. Feature Engineering Recommendations

  • Numeric Features: Identifies columns suitable for scaling and normalization
  • Categorical Features: Detects columns requiring encoding (one-hot, label encoding)
  • Target Variable: Helps identify the dependent variable for modeling
  • Feature Selection: Recommends relevant features based on data types
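
For illustration, numeric scaling in ClearScape Analytics is typically a fit/transform pair. The sketch below assumes the TD_ScaleFit function with illustrative column names; verify the exact parameter names against your Vantage release:

```sql
-- Fit standardization parameters on the numeric features
CREATE TABLE scale_model AS (
    SELECT * FROM TD_ScaleFit (
        ON your_database.your_table AS InputTable
        USING
            TargetColumns ('num_feat_1', 'num_feat_2')
            ScaleMethod ('STD')  -- z-score standardization
    ) AS dt
) WITH DATA;
```

Categorical columns typically get the analogous TD_OneHotEncodingFit / TD_OneHotEncodingTransform pair before training.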

3. SQL Generation Process

  • Dynamic Column Lists: Generates column lists based on your table structure
  • Parameterized Queries: Creates flexible SQL templates using your table schema
  • Table Name Integration: Replaces placeholders with your actual table names
  • Database Context: Adapts to your database and schema naming conventions
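
The substitution step can be pictured with a template fragment, where {db}, {table}, and {id_col} are hypothetical placeholders filled in from the table analysis:

```sql
-- Template before substitution; the skill replaces each {placeholder}
-- with values discovered during table structure analysis
SELECT COUNT(*) AS n_rows,
       COUNT(DISTINCT {id_col}) AS n_unique_ids
FROM {db}.{table};
```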

How to Use This Skill

  1. Provide Your Table Information:

    "Analyze table: database_name.table_name"
    or
    "Use table: my_data with target column: target_var"
    
  2. The Skill Will:

    • Query your table structure (Teradata exposes this via HELP COLUMN table_name.* or the DBC.ColumnsV dictionary view)
    • Analyze data types and suggest appropriate preprocessing
    • Generate complete SQL workflow with your specific column names
    • Provide optimized parameters based on your data characteristics

Input Requirements

Data Requirements

  • Source table: Teradata table with analytical data
  • Target column: Dependent variable for classification analysis
  • Feature columns: Independent variables (numeric and categorical)
  • ID column: Unique identifier for record tracking
  • Minimum sample size: 100+ records for reliable classification modeling

Technical Requirements

  • Teradata Vantage with ClearScape Analytics enabled
  • Database permissions: CREATE, DROP, SELECT on working database
  • Function access: TD_SVM, SVMSparsePredict
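
As a sketch, the database permissions listed above might be granted like this (user and database names are placeholders):

```sql
-- Grant the working-database rights the workflow needs
GRANT CREATE TABLE, DROP TABLE ON your_database TO your_user;
GRANT SELECT ON your_database TO your_user;
```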

Output Formats

Generated Tables

  • Preprocessed data tables with proper scaling and encoding
  • Train/test split tables for model validation
  • Model table containing trained TD_SVM parameters
  • Prediction results with confidence metrics
  • Evaluation metrics table with performance statistics
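
To make the model table concrete, here is a hedged sketch of a TD_SVM training call. Parameter names mirror the usual ClearScape Analytics ON ... USING convention but should be checked against the official reference; all table and column names are placeholders:

```sql
-- Train a classifier; the result set materialized here is the model table
CREATE TABLE svm_model AS (
    SELECT * FROM TD_SVM (
        ON train_table AS InputTable
        USING
            InputColumns ('feat_1', 'feat_2', 'feat_3')
            ResponseColumn ('target')
            ModelType ('Classification')
            MaxIterNum (300)
    ) AS dt
) WITH DATA;
```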

SQL Scripts

  • Complete workflow scripts ready for execution
  • Parameterized queries for different datasets
  • Table management with proper cleanup procedures

Classification Use Cases Supported

  1. Non-linear classification
  2. High-dimensional data
  3. Kernel methods

Each use case is supported by the same end-to-end analysis workflow, from preprocessing through evaluation.

Best Practices Applied

  • Data validation before analysis execution
  • Proper feature scaling and categorical encoding
  • Train-test splitting with stratification when appropriate
  • Cross-validation for robust model evaluation
  • Parameter optimization using systematic approaches
  • Residual analysis and diagnostic checks
  • Business interpretation of statistical results
  • Documentation of methodology and assumptions
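
For the evaluation practices above, ClearScape Analytics provides TD_ClassificationEvaluator; the call below is a sketch with placeholder table and column names and should be verified against your Vantage release:

```sql
-- Compare predicted labels to observed labels on the held-out split
SELECT * FROM TD_ClassificationEvaluator (
    ON predictions_table AS InputTable
    USING
        ObservationColumn ('target')
        PredictionColumn ('prediction')
        Labels ('0', '1')
) AS dt;
```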

Example Usage

```sql
-- Example workflow for Teradata Support Vector Machine
-- Replace 'your_table' with actual table name

-- 1. Data exploration and validation
SELECT COUNT(*),
       COUNT(DISTINCT your_id_column),
       AVG(your_target_column),
       STDDEV(your_target_column)
FROM your_database.your_table;

-- 2. Execute complete classification workflow
-- (Detailed SQL provided by the skill)
```

Scripts Included

Core Analytics Scripts

  • preprocessing.sql: Data preparation and feature engineering
  • table_analysis.sql: Automatic table structure analysis
  • complete_workflow_template.sql: End-to-end workflow template
  • model_training.sql: TD_SVM training procedures
  • prediction.sql: SVMSparsePredict execution
  • evaluation.sql: Model validation and metrics calculation
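
As an illustration of the prediction step, newer Vantage releases expose a TD_SVMPredict analog of SVMSparsePredict. The sketch below uses placeholder table and column names and assumed parameter names, so treat it as an outline rather than exact syntax:

```sql
-- Score new rows against the trained model table
CREATE TABLE svm_predictions AS (
    SELECT * FROM TD_SVMPredict (
        ON score_table AS InputTable
        ON svm_model AS ModelTable DIMENSION
        USING
            IDColumn ('row_id')
            Accumulate ('target')
    ) AS dt
) WITH DATA;
```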

Utility Scripts

  • data_quality_checks.sql: Comprehensive data validation
  • parameter_tuning.sql: Systematic parameter optimization
  • diagnostic_queries.sql: Model diagnostics and interpretation

Limitations and Disclaimers

  • Data quality: Results depend on input data quality and completeness
  • Sample size: Minimum sample size requirements for reliable results
  • Feature selection: Manual feature engineering may be required
  • Computational resources: Large datasets may require optimization
  • Business context: Statistical results require domain expertise for interpretation
  • Model assumptions: Understand underlying mathematical assumptions

Quality Checks

Automated Validations

  • Data completeness verification before analysis
  • Statistical assumptions testing where applicable
  • Model convergence monitoring during training
  • Prediction quality assessment using validation data
  • Performance metrics calculation and interpretation
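
A completeness verification of the kind described above can be sketched in plain SQL (column names are placeholders; the skill would generate the list from the table analysis step):

```sql
-- Per-column NULL counts before training
SELECT COUNT(*) AS n_rows,
       SUM(CASE WHEN feat_1 IS NULL THEN 1 ELSE 0 END) AS feat_1_nulls,
       SUM(CASE WHEN feat_2 IS NULL THEN 1 ELSE 0 END) AS feat_2_nulls,
       SUM(CASE WHEN target IS NULL THEN 1 ELSE 0 END) AS target_nulls
FROM your_database.your_table;
```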

Manual Review Points

  • Feature selection appropriateness for business problem
  • Model interpretation alignment with domain knowledge
  • Results validation against business expectations
  • Documentation completeness for reproducibility

Updates and Maintenance

  • Version compatibility: Tested with latest Teradata Vantage releases
  • Performance optimization: Regular query performance reviews
  • Best practices: Updated based on analytics community feedback
  • Documentation: Maintained with latest ClearScape Analytics features
  • Examples: Updated with real-world use cases and scenarios

This skill provides production-ready classification analytics using Teradata ClearScape Analytics TD_SVM with comprehensive data science best practices.
