# nixtla-timegpt-finetune-lab
Enables TimeGPT model fine-tuning on custom datasets with Nixtla SDK. Guides dataset preparation, job submission, status monitoring, model comparison, and accuracy benchmarking. Activates when user needs TimeGPT fine-tuning, custom model training, domain-specific optimization, or zero-shot vs fine-tuned comparison.
## When & Why to Use This Skill
This Claude skill streamlines the end-to-end workflow for fine-tuning Nixtla's TimeGPT models on custom datasets. It handles the technical complexities of time-series forecasting optimization, including data validation, API job management, and rigorous performance benchmarking. By facilitating the transition from zero-shot forecasting to domain-specific models, it helps data scientists and developers achieve higher predictive accuracy with minimal manual overhead.
## Use Cases
- Retail Demand Forecasting: Fine-tuning models on historical sales data to capture unique seasonal patterns and local consumer behavior for better inventory management.
- Financial Market Analysis: Optimizing TimeGPT for specific asset classes or high-frequency trading data to improve trend prediction and risk assessment.
- Energy Grid Management: Training models on regional utility usage and weather data to provide more precise load forecasting for infrastructure planning.
- Model Performance Benchmarking: Systematically comparing out-of-the-box zero-shot results against fine-tuned versions to quantify accuracy gains and justify production deployment.
| Field | Value |
|---|---|
| name | nixtla-timegpt-finetune-lab |
| description | Enables TimeGPT model fine-tuning on custom datasets with Nixtla SDK. Guides dataset preparation, job submission, status monitoring, model comparison, and accuracy benchmarking. Activates when user needs TimeGPT fine-tuning, custom model training, domain-specific optimization, or zero-shot vs fine-tuned comparison. |
| allowed-tools | Read, Write, Glob, Grep, Edit |
| version | 1.0.0 |
| license | MIT |
# Nixtla TimeGPT Fine-Tuning Lab
Guide users through production-ready TimeGPT fine-tuning workflows.
## Overview
This skill manages TimeGPT fine-tuning:
- Dataset preparation: Validate and format training data
- Job submission: Submit fine-tuning jobs to TimeGPT API
- Status monitoring: Track job progress until completion
- Model comparison: Compare zero-shot vs fine-tuned performance
## Prerequisites

Required:

- Python 3.8+
- `nixtla` package
- `NIXTLA_API_KEY` environment variable

Installation:

```bash
pip install nixtla pandas utilsforecast
export NIXTLA_API_KEY='your-api-key'
```

Get an API key at https://dashboard.nixtla.io.
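Before running any of the steps below, you can confirm the key is picked up. A minimal sketch, assuming a recent `nixtla` SDK where `NixtlaClient` exposes `validate_api_key()`:

```python
import os

from nixtla import NixtlaClient

# Pass the key explicitly; recent SDK versions also fall back to the
# NIXTLA_API_KEY environment variable when api_key is omitted.
client = NixtlaClient(api_key=os.environ["NIXTLA_API_KEY"])

# validate_api_key() returns True when the key is accepted by the API.
if not client.validate_api_key():
    raise SystemExit("NIXTLA_API_KEY is set but was rejected by the API")
```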
## Instructions
### Step 1: Prepare Dataset

Ensure data is in the Nixtla schema (`unique_id`, `ds`, `y` columns):

```bash
python {baseDir}/scripts/prepare_finetune_data.py \
  --input data/sales.csv \
  --output data/finetune_train.csv
```
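If you prepare the file by hand instead of using the script, the same transformation is a few lines of pandas. A sketch, assuming a raw CSV with `store_id`, `date`, and `sales` columns (hypothetical names; substitute your own):

```python
import pandas as pd

# Hypothetical input columns; rename to the Nixtla schema:
# unique_id (series key), ds (timestamp), y (target).
df = pd.read_csv("data/sales.csv")
df = df.rename(columns={"store_id": "unique_id", "date": "ds", "sales": "y"})

df["ds"] = pd.to_datetime(df["ds"])        # TimeGPT expects parseable timestamps
df = df.sort_values(["unique_id", "ds"])   # one ordered series per unique_id
df = df.dropna(subset=["y"])               # fine-tuning jobs reject NaN targets

df.to_csv("data/finetune_train.csv", index=False)
```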
### Step 2: Configure Fine-Tuning

```bash
python {baseDir}/scripts/configure_finetune.py \
  --train data/finetune_train.csv \
  --model_name "sales-model-v1" \
  --horizon 14 \
  --freq D
```
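Under the hood this step just records your choices. A sketch of what the generated `forecasting/finetune_config.yml` might contain, written from Python (the exact keys are an assumption; check the file the script actually emits):

```python
import yaml  # requires pyyaml (pip install pyyaml)

# Hypothetical config layout; the script's real schema may differ.
config = {
    "model_name": "sales-model-v1",
    "train_path": "data/finetune_train.csv",
    "horizon": 14,  # forecast 14 steps ahead
    "freq": "D",    # daily observations (pandas frequency alias)
}

with open("forecasting/finetune_config.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```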
### Step 3: Submit Job

```bash
python {baseDir}/scripts/submit_finetune.py \
  --config forecasting/finetune_config.yml
```
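If you prefer calling the SDK directly, recent `nixtla` versions expose a `finetune` method that returns a fine-tuned model ID reusable in later forecasts. A sketch under that assumption (verify the method exists in your installed SDK version):

```python
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient()  # reads NIXTLA_API_KEY from the environment

train = pd.read_csv("data/finetune_train.csv", parse_dates=["ds"])

# Assumption: client.finetune returns the fine-tuned model's ID as a string.
finetuned_model_id = client.finetune(df=train, freq="D")

# Persist the ID where Step 5 expects it.
with open("forecasting/artifacts/finetune_model_id.txt", "w") as f:
    f.write(finetuned_model_id)
```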
### Step 4: Monitor Progress

```bash
python {baseDir}/scripts/monitor_finetune.py \
  --job_id <job_id>
```
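The monitor script polls until the job finishes. A hedged sketch of the same idea, assuming `client.finetuned_models()` lists your account's completed fine-tuned models (available in recent SDK versions; adapt if yours differs):

```python
import time
from pathlib import Path

from nixtla import NixtlaClient

client = NixtlaClient()
model_id = Path("forecasting/artifacts/finetune_model_id.txt").read_text().strip()

# Poll until the fine-tuned model appears in the account's model list.
while True:
    ids = {m.id for m in client.finetuned_models()}  # assumption: objects expose .id
    if model_id in ids:
        print(f"Fine-tuned model ready: {model_id}")
        break
    time.sleep(30)  # back off between polls to stay within rate limits
```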
### Step 5: Compare Models

```bash
python {baseDir}/scripts/compare_finetuned.py \
  --test data/test.csv \
  --finetune_id <model_id>
```
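The comparison boils down to forecasting the same horizon twice and scoring both runs against held-out actuals. A sketch using `utilsforecast` for SMAPE, assuming `client.forecast` accepts a `finetuned_model_id` argument in your SDK version:

```python
import pandas as pd
from nixtla import NixtlaClient
from utilsforecast.evaluation import evaluate
from utilsforecast.losses import smape

client = NixtlaClient()
train = pd.read_csv("data/finetune_train.csv", parse_dates=["ds"])
test = pd.read_csv("data/test.csv", parse_dates=["ds"])

h = 14  # same horizon configured in Step 2

# Zero-shot and fine-tuned forecasts over the same horizon.
zero_shot = client.forecast(df=train, h=h, freq="D")
fine_tuned = client.forecast(df=train, h=h, freq="D",
                             finetuned_model_id="<model_id>")

# Join both prediction columns onto the held-out actuals, then score.
merged = (test
          .merge(zero_shot, on=["unique_id", "ds"])
          .merge(fine_tuned, on=["unique_id", "ds"], suffixes=("_zero", "_ft")))
print(evaluate(merged, metrics=[smape],
               models=["TimeGPT_zero", "TimeGPT_ft"], target_col="y"))
```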
## Output
- forecasting/finetune_config.yml: Fine-tuning configuration
- forecasting/artifacts/finetune_model_id.txt: Saved model ID
- forecasting/results/comparison_metrics.csv: Performance comparison
## Error Handling

| Error | Solution |
|---|---|
| `NIXTLA_API_KEY not set` | Export your API key: `export NIXTLA_API_KEY='...'` |
| `Insufficient training data` | Provide 100+ observations per series |
| `Fine-tuning job failed` | Check data format; ensure no NaN values |
| `Model ID not found` | Verify the job completed; check the artifacts directory |
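Most of these failures can be caught before submission. A small pre-flight check, a sketch grounded in the table above (the 100-observation floor comes from the error guidance, not a documented API limit):

```python
import pandas as pd

df = pd.read_csv("data/finetune_train.csv", parse_dates=["ds"])

# No NaN targets: fine-tuning jobs fail on missing y values.
assert df["y"].notna().all(), "NaN values found in target column y"

# 100+ observations per series, per the error guidance above.
counts = df.groupby("unique_id").size()
too_short = counts[counts < 100]
assert too_short.empty, f"Series below 100 observations: {list(too_short.index)}"
```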
## Examples

### Example 1: Basic Fine-Tuning
```bash
# Prepare data
python {baseDir}/scripts/prepare_finetune_data.py \
  --input sales.csv --output train.csv

# Submit job
python {baseDir}/scripts/submit_finetune.py \
  --train train.csv \
  --model_name "my-sales-model" \
  --horizon 14
```
Output:

```text
Fine-tuning job submitted: job_abc123
Model ID saved to: artifacts/finetune_model_id.txt
```
### Example 2: Compare Zero-Shot vs Fine-Tuned

```bash
python {baseDir}/scripts/compare_finetuned.py \
  --test test.csv \
  --finetune_id my-sales-model
```
Output:

```text
Model Comparison:
  TimeGPT Zero-Shot:  SMAPE=12.3%
  TimeGPT Fine-Tuned: SMAPE=8.7%
  Improvement: 29.3%
```

Improvement is reported as the relative SMAPE reduction: (12.3 − 8.7) / 12.3 ≈ 29.3%.
## Resources

- Scripts: `{baseDir}/scripts/`
- TimeGPT Docs: https://docs.nixtla.io/
- Fine-Tuning Guide: https://docs.nixtla.io/docs/finetuning

Related Skills:

- `nixtla-schema-mapper`: Prepare data before fine-tuning
- `nixtla-experiment-architect`: Create baseline experiments
- `nixtla-usage-optimizer`: Evaluate cost-effectiveness