PostgreSQL to TimescaleDB Hypertable Migration
Migrate identified PostgreSQL tables to TimescaleDB hypertables with optimal configuration, migration planning and validation.
Prerequisites: Tables already identified as hypertable candidates (use companion "find-hypertable-candidates" skill if needed).
Step 1: Optimal Configuration
Partition Column Selection
```sql
-- Find potential partition columns
SELECT column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_name = 'your_table_name'
  AND data_type IN ('timestamp without time zone', 'timestamp with time zone',
                    'bigint', 'integer', 'date')
ORDER BY ordinal_position;
```
Requirements: Time-based (TIMESTAMP/TIMESTAMPTZ/DATE) or sequential integer (INT/BIGINT)
The column should represent when the event actually occurred, or provide the sequential ordering of events.
Common choices:
- timestamp, created_at, event_time - when the event occurred
- id, sequence_number - auto-increment (for sequential data without timestamps)
- ingested_at - less ideal; use only if it is the primary query dimension
- updated_at - AVOID (records are updated out of order, which breaks chunk distribution) unless it is the primary query dimension
Special Case: table with BOTH ID AND Timestamp
When a table has a sequential ID (PK) AND a timestamp that correlate:
```sql
-- Partition by ID, enable minmax sparse indexes on timestamp
SELECT create_hypertable('orders', 'id', chunk_time_interval => 1000000);
ALTER TABLE orders SET (
    timescaledb.sparse_index = 'minmax(created_at),...'
);
```
Sparse indexes on time column enable skipping compressed blocks outside queried time ranges.
Use when: ID correlates with time (newer records have higher IDs), ID-based lookups are needed, and time-range queries are also common.
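To verify that the two columns actually correlate before partitioning by ID, a quick sketch reusing the orders/created_at names from the example above (the column names are assumptions):

```sql
-- Values close to 1.0 mean newer rows have higher IDs, so minmax sparse
-- indexes on created_at will prune compressed data effectively.
SELECT corr(id::double precision, EXTRACT(EPOCH FROM created_at)::double precision)
       AS id_time_correlation
FROM orders;
```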
Chunk Interval Selection
```sql
-- Ensure statistics are current
ANALYZE your_table_name;

-- Estimate index size per time unit
WITH time_range AS (
    SELECT
        MIN(timestamp_column) AS min_time,
        MAX(timestamp_column) AS max_time,
        EXTRACT(EPOCH FROM (MAX(timestamp_column) - MIN(timestamp_column))) / 3600 AS total_hours
    FROM your_table_name
),
total_index_size AS (
    SELECT SUM(pg_relation_size(indexrelid)) AS total_index_bytes
    FROM pg_stat_user_indexes
    WHERE schemaname = 'your_schema'
      AND relname = 'your_table_name'
)
SELECT
    pg_size_pretty((tis.total_index_bytes / tr.total_hours)::bigint) AS index_size_per_hour
FROM time_range tr, total_index_size tis;
```
- Target: Indexes of recent chunks < 25% of RAM
- Default: 7 days. IMPORTANT: keep the default if unsure.
- Range: 1 hour minimum, 30 days maximum
Example: 32GB RAM → target 8GB for recent indexes. If index_size_per_hour = 200MB:
- 1 hour chunks: 200MB chunk index size × 40 recent = 8GB ✓
- 6 hour chunks: 1.2GB chunk index size × 7 recent = 8.4GB ✓
- 1 day chunks: 4.8GB chunk index size × 2 recent = 9.6GB ⚠️

Choose the largest interval that keeps 2+ recent chunk indexes under the target.
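Once an interval is chosen, it can be set at creation time or adjusted later. A minimal sketch assuming the 6-hour result from the example; note that set_chunk_time_interval only affects chunks created afterwards:

```sql
-- Set the interval when creating the hypertable...
SELECT create_hypertable('your_table_name', 'timestamp_column',
                         chunk_time_interval => INTERVAL '6 hours');

-- ...or adjust it on an existing hypertable (applies to new chunks only)
SELECT set_chunk_time_interval('your_table_name', INTERVAL '6 hours');
```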
Primary Key / Unique Constraint Compatibility
```sql
-- Check existing primary key / unique constraints
SELECT conname, pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE conrelid = 'your_table_name'::regclass
  AND contype IN ('p', 'u');
```
Rules: PK/UNIQUE must include partition column
Actions:
- No PK/UNIQUE: No changes needed
- PK/UNIQUE includes partition column: No changes needed
- PK/UNIQUE excludes partition column: ⚠️ ASK USER PERMISSION to modify PK/UNIQUE
Example user prompts, if needed:

"Primary key (id) doesn't include partition column (timestamp). Must modify to PRIMARY KEY (id, timestamp) to convert to hypertable. This may break application code. Is this acceptable?"

"Unique constraint (id) doesn't include partition column (timestamp). Must modify to UNIQUE (id, timestamp) to convert to hypertable. This may break application code. Is this acceptable?"
If the user accepts, modify the constraint:
```sql
BEGIN;
ALTER TABLE your_table_name DROP CONSTRAINT existing_pk_name;
ALTER TABLE your_table_name ADD PRIMARY KEY (existing_columns, partition_column);
COMMIT;
```
If the user does not accept, you should NOT migrate the table.
IMPORTANT: DO NOT modify the primary key/unique constraint without user permission.
Compression Configuration
For detailed segment_by and order_by selection, see "setup-timescaledb-hypertables" skill. Quick reference:
segment_by: Most common WHERE filter with >100 rows per value per chunk
- IoT: device_id
- Finance: symbol
- Analytics: user_id or session_id
```sql
-- Analyze cardinality of a candidate segment_by column
SELECT
    COUNT(DISTINCT candidate_column) AS unique_values,
    ROUND(COUNT(*)::numeric / COUNT(DISTINCT candidate_column), 2) AS avg_rows_per_value
FROM your_table_name;
-- For a per-chunk view, restrict the WHERE clause to one chunk interval of data
```
order_by: Usually timestamp DESC. The (segment_by, order_by) combination should form a natural time-series progression.
- If a column has <100 rows per value per chunk (too low for segment_by), prepend it to order_by, e.g. order_by = 'low_density_col, timestamp DESC' (see the sketch below).
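A minimal sketch of the prepended order_by, assuming a hypothetical low-density column named low_density_col:

```sql
ALTER TABLE your_table_name SET (
    timescaledb.enable_columnstore,
    timescaledb.segmentby = 'entity_id',
    timescaledb.orderby = 'low_density_col, timestamp DESC'
);
```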
sparse indexes: add minmax sparse indexes on columns that appear in WHERE clauses (especially range filters) but are not part of segment_by or order_by.
```sql
ALTER TABLE your_table_name SET (
    timescaledb.enable_columnstore,
    timescaledb.segmentby = 'entity_id',
    timescaledb.orderby = 'timestamp DESC',
    timescaledb.sparse_index = 'minmax(value_1),...'
);

-- Compress after data is unlikely to change (adjust the `after` parameter based on update patterns)
CALL add_columnstore_policy('your_table_name', after => INTERVAL '7 days');
```
Step 2: Migration Planning
Pre-Migration Checklist
- Partition column selected
- Chunk interval calculated (or using default)
- PK includes partition column OR user approved modification
- No Hypertable→Hypertable foreign keys
- Unique constraints include partition column
- Created compression configuration (segment_by, order_by, sparse indexes, compression policy)
- Maintenance window scheduled / backup created.
Migration Options
Option 1: In-Place (Tables < 1GB)
```sql
-- Enable extension
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Convert to hypertable (locks table; migrate_data is required because the table already contains rows)
SELECT create_hypertable(
    'your_table_name',
    'timestamp_column',
    chunk_time_interval => INTERVAL '7 days',
    migrate_data => TRUE,
    if_not_exists => TRUE
);

-- Configure compression
ALTER TABLE your_table_name SET (
    timescaledb.enable_columnstore,
    timescaledb.segmentby = 'entity_id',
    timescaledb.orderby = 'timestamp DESC',
    timescaledb.sparse_index = 'minmax(value_1),...'
);

-- Adjust `after` parameter based on update patterns
CALL add_columnstore_policy('your_table_name', after => INTERVAL '7 days');
```
Option 2: Blue-Green (Tables > 1GB)
```sql
-- 1. Create new hypertable
CREATE TABLE your_table_name_new (LIKE your_table_name INCLUDING ALL);

-- 2. Convert to hypertable
SELECT create_hypertable('your_table_name_new', 'timestamp_column');

-- 3. Configure compression
ALTER TABLE your_table_name_new SET (
    timescaledb.enable_columnstore,
    timescaledb.segmentby = 'entity_id',
    timescaledb.orderby = 'timestamp DESC'
);

-- 4. Migrate data in batches
INSERT INTO your_table_name_new
SELECT * FROM your_table_name
WHERE timestamp_column >= '2024-01-01' AND timestamp_column < '2024-02-01';
-- Repeat for each time range

-- 5. Enter maintenance window and do the following:

-- 6. Pause modification of the old table.

-- 7. Copy over the most recent data from the old table to the new table.

-- 8. Swap tables
BEGIN;
ALTER TABLE your_table_name RENAME TO your_table_name_old;
ALTER TABLE your_table_name_new RENAME TO your_table_name;
COMMIT;

-- 9. Exit maintenance window.

-- 10. (sometime much later) Drop old table after validation
-- DROP TABLE your_table_name_old;
```
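The batched backfill in step 4 can be scripted. A sketch assuming monthly batches and the table/column names from the example above (the date range is a placeholder):

```sql
DO $$
DECLARE
    cur timestamptz := '2024-01-01';
BEGIN
    WHILE cur < '2025-01-01' LOOP
        INSERT INTO your_table_name_new
        SELECT * FROM your_table_name
        WHERE timestamp_column >= cur
          AND timestamp_column < cur + INTERVAL '1 month';
        RAISE NOTICE 'migrated batch starting at %', cur;
        -- Keep batches in separate transactions; works when the DO block
        -- is not run inside an explicit transaction (PostgreSQL 11+).
        COMMIT;
        cur := cur + INTERVAL '1 month';
    END LOOP;
END $$;
```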
Common Issues
Foreign Keys
```sql
-- Check foreign keys
SELECT conname, confrelid::regclass AS referenced_table
FROM pg_constraint
WHERE (conrelid = 'your_table_name'::regclass
       OR confrelid = 'your_table_name'::regclass)
  AND contype = 'f';
```
Supported: Plain→Hypertable, Hypertable→Plain
NOT supported: Hypertable→Hypertable
⚠️ CRITICAL: Hypertable→Hypertable FKs must be dropped (enforce in application). ASK USER PERMISSION. If no, STOP MIGRATION.
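If the user approves, drop the constraint and enforce the relationship in the application. A sketch with hypothetical table and constraint names:

```sql
ALTER TABLE child_events DROP CONSTRAINT child_events_parent_id_fkey;
-- Re-check referential integrity periodically in application code or with a validation query.
```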
Large Table Migration Time
```sql
-- Rough estimate: ~75k rows/second
SELECT
    pg_size_pretty(pg_total_relation_size(relid)) AS size,
    n_live_tup AS rows,
    ROUND(n_live_tup / 75000.0 / 60, 1) AS estimated_minutes
FROM pg_stat_user_tables
WHERE relname = 'your_table_name';
```
Solutions for large tables (>1GB / 10M rows): use blue-green migration, migrate during off-peak hours, and test on a subset first (see the sketch below).
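A sketch of a subset test, assuming a one-month copy (the table name and date range are placeholders):

```sql
CREATE TABLE your_table_name_test AS
SELECT * FROM your_table_name
WHERE timestamp_column >= '2024-01-01' AND timestamp_column < '2024-02-01';

-- migrate_data => TRUE is required because the copy already contains rows
SELECT create_hypertable('your_table_name_test', 'timestamp_column',
                         chunk_time_interval => INTERVAL '7 days',
                         migrate_data => TRUE);
```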
Step 3: Performance Validation
Chunk & Compression Analysis
```sql
-- View chunks and compression
SELECT
    c.chunk_name,
    c.range_start,
    c.range_end,
    ccs.compression_status,
    pg_size_pretty(ccs.before_compression_total_bytes) AS uncompressed_size,
    pg_size_pretty(ccs.after_compression_total_bytes) AS compressed_size,
    ROUND((ccs.before_compression_total_bytes - ccs.after_compression_total_bytes)::numeric
          / NULLIF(ccs.before_compression_total_bytes, 0) * 100, 1) AS compression_pct
FROM timescaledb_information.chunks c
LEFT JOIN chunk_compression_stats('your_table_name') ccs
       ON c.chunk_name = ccs.chunk_name
WHERE c.hypertable_name = 'your_table_name'
ORDER BY c.range_start DESC;
```
Look for:
- Consistent chunk sizes (within 2x)
- Compression >90% for time-series
- Recent chunks uncompressed
- Chunk indexes < 25% RAM (see the check below)
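A quick check for the last item, summing the index sizes of the two most recent chunks (compare the result against roughly 25% of RAM):

```sql
SELECT pg_size_pretty(SUM(pg_indexes_size(format('%I.%I', chunk_schema, chunk_name)::regclass)))
       AS recent_chunk_index_size
FROM (
    SELECT chunk_schema, chunk_name
    FROM timescaledb_information.chunks
    WHERE hypertable_name = 'your_table_name'
    ORDER BY range_start DESC
    LIMIT 2
) recent;
```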
Query Performance Tests
```sql
-- 1. Time-range query (should show chunk exclusion)
EXPLAIN (ANALYZE, BUFFERS)
SELECT COUNT(*), AVG(value)
FROM your_table_name
WHERE timestamp >= NOW() - INTERVAL '1 day';

-- 2. Entity + time query (benefits from segment_by)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM your_table_name
WHERE entity_id = 'X' AND timestamp >= NOW() - INTERVAL '1 week';

-- 3. Aggregation (benefits from columnstore)
EXPLAIN (ANALYZE, BUFFERS)
SELECT DATE_TRUNC('hour', timestamp), entity_id, COUNT(*), AVG(value)
FROM your_table_name
WHERE timestamp >= NOW() - INTERVAL '1 month'
GROUP BY 1, 2;
```
✅ Good signs:
- "Chunks excluded during startup: X" in EXPLAIN plan
- "Custom Scan (ColumnarScan)" for compressed data
- Lower "Buffers: shared read" in EXPLAIN ANALYZE plan than pre-migration
- Faster execution times
❌ Bad signs:
- "Seq Scan" on large chunks
- No chunk exclusion messages
- Slower than before migration
Storage Metrics
```sql
-- Monitor compression effectiveness
SELECT
    total_chunks,
    number_compressed_chunks,
    ROUND(number_compressed_chunks::numeric / NULLIF(total_chunks, 0) * 100, 1) AS compressed_pct_of_chunks,
    pg_size_pretty(before_compression_total_bytes) AS uncompressed_size,
    pg_size_pretty(after_compression_total_bytes) AS compressed_size,
    ROUND((before_compression_total_bytes - after_compression_total_bytes)::numeric
          / NULLIF(before_compression_total_bytes, 0) * 100, 1) AS compression_ratio_pct
FROM hypertable_compression_stats('your_table_name');
```
Monitor:
- compression_ratio_pct >90% (typical for time-series)
- compressed_pct_of_chunks growing as data ages
- Size growth slowing significantly vs pre-hypertable
- Decreasing compression_ratio_pct = poor segment_by
Troubleshooting
Poor Chunk Exclusion
```sql
-- Verify chunks are being excluded
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM your_table_name
WHERE timestamp >= '2024-01-01' AND timestamp < '2024-01-02';
-- Look for "Chunks excluded during startup: X"
```
Poor Compression
```sql
-- Get newest compressed chunk name
SELECT chunk_schema, chunk_name
FROM timescaledb_information.chunks
WHERE hypertable_name = 'your_table_name'
  AND is_compressed
ORDER BY range_start DESC
LIMIT 1;

-- Analyze segment distribution
SELECT segment_by_column, COUNT(*) AS rows_per_segment
FROM _timescaledb_internal._hyper_X_Y_chunk  -- Use actual chunk name from above
GROUP BY 1 ORDER BY 2 DESC;
```
Look for: <20 rows per segment indicates a poor segment_by choice (target >100) and low compression potential.
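If segment_by was chosen poorly, change the setting and recompress. A sketch assuming device_id is the better candidate; decompressing first ensures the new setting applies to every chunk when it is recompressed:

```sql
SELECT decompress_chunk(c, if_compressed => TRUE) FROM show_chunks('your_table_name') c;

ALTER TABLE your_table_name SET (timescaledb.segmentby = 'device_id');

SELECT compress_chunk(c, if_not_compressed => TRUE) FROM show_chunks('your_table_name') c;
```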
Poor insert performance
Check that you don't have too many indexes. Unused indexes hurt insert performance and should be dropped.
```sql
SELECT
    schemaname,
    relname,
    indexrelname,
    idx_tup_read,
    idx_tup_fetch,
    idx_scan
FROM pg_stat_user_indexes
WHERE relname LIKE '%your_table_name%'
ORDER BY idx_scan ASC;  -- unused indexes appear first
```
Look for: Unused indexes via a low idx_scan value. Drop such indexes (but ask user permission).
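A sketch of the drop, with a hypothetical index name, to run only after the user confirms:

```sql
DROP INDEX IF EXISTS idx_your_table_name_unused_column;
```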
Ongoing Monitoring
```sql
-- Monitor chunk compression status
CREATE OR REPLACE VIEW hypertable_compression_status AS
SELECT
    h.hypertable_name,
    h.num_chunks AS total_chunks,
    COUNT(c.chunk_name) FILTER (WHERE c.is_compressed) AS compressed_chunks,
    ROUND(
        COUNT(c.chunk_name) FILTER (WHERE c.is_compressed)::numeric /
        NULLIF(COUNT(c.chunk_name), 0) * 100, 1
    ) AS compression_coverage_pct,
    pg_size_pretty(hypertable_size(format('%I.%I', h.hypertable_schema, h.hypertable_name)::regclass)) AS total_size
FROM timescaledb_information.hypertables h
LEFT JOIN timescaledb_information.chunks c
       ON h.hypertable_name = c.hypertable_name
      AND h.hypertable_schema = c.hypertable_schema
GROUP BY h.hypertable_schema, h.hypertable_name, h.num_chunks;

-- Query this view regularly to monitor compression progress
SELECT * FROM hypertable_compression_status
WHERE hypertable_name = 'your_table_name';
```
Look for:
- compression_coverage_pct should increase over time as data ages and gets compressed.
- total_chunks should not grow too quickly (more than 10000 becomes a problem).
- You should not see unexpected spikes in total_size.
Success Criteria
✅ Migration successful when:
- All queries return correct results
- Query performance equal or better
- Compression >90% for older data
- Chunk exclusion working for time queries
- Insert performance acceptable
❌ Investigate if:
- Query performance >20% worse
- Compression <80%
- No chunk exclusion
- Insert performance degraded
- Increased error rates
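To back the first success criterion, a minimal row-count comparison after a blue-green swap (the _old table name follows the example above):

```sql
SELECT
    (SELECT COUNT(*) FROM your_table_name)     AS hypertable_rows,
    (SELECT COUNT(*) FROM your_table_name_old) AS original_rows;
```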
Focus on high-volume, insert-heavy workloads with time-based access patterns for best ROI.