programmatic-eda

from nimrodfisher

Systematic exploratory data analysis following best practices. Use when analyzing any dataset to understand structure, identify data quality issues (duplicates, missing values, inconsistencies, outliers), examine distributions, detect correlations, and generate visualizations. Provides comprehensive data profiling with sanity checks before analysis.

Updated Jan 11, 2026

When & Why to Use This Skill

This Claude skill provides a systematic framework for Programmatic Exploratory Data Analysis (EDA). It automates the detection of critical data quality issues—including missing values, outliers, and duplicates—while providing deep insights through distribution analysis and correlation mapping. By following industry best practices, it ensures data integrity and readiness for advanced modeling, reporting, or business intelligence tasks.

Use Cases

  • Initial Data Assessment: Quickly understanding the structure, types, and health of a newly acquired dataset before starting a project.
  • Data Quality Auditing: Identifying inconsistencies and anomalies in business records to prevent 'garbage in, garbage out' scenarios in analytics.
  • Statistical Exploration: Examining variable distributions and detecting correlations to uncover hidden patterns and relationships within the data.
  • Pre-modeling Preparation: Performing automated sanity checks and data profiling to define cleaning requirements for machine learning pipelines.

Programmatic EDA

Quick Start

Execute systematic data quality checks, distribution analysis, and correlation detection on any dataset with automated sanity checks.

Context Requirements

Before starting EDA, Claude needs:

  1. Dataset Access: The data file or database connection
  2. Business Context: What this data represents and what decisions it informs
  3. Quality Thresholds (optional): What % missing/outliers are acceptable

Context Gathering

If the dataset is not yet loaded:

"Please provide your dataset. I can work with:

  • CSV/Excel files (upload or provide path)
  • Database connection details
  • Pandas DataFrame (if already loaded in notebook)"

If business context missing:

"To provide relevant insights, I need to understand:

  1. What does this dataset represent? (customers, transactions, events, etc.)
  2. What business question are you trying to answer?
  3. What time period does this cover?
  4. Are there any known data quality issues I should be aware of?"

For quality thresholds (if not provided, use defaults):

"I'll use standard thresholds unless you specify otherwise:

  • Missing values: Flag if >5% (warn if >30%)
  • Outliers: Flag using IQR method (1.5 × IQR)
  • Duplicates: Flag if >1%

Do these work for your use case, or should I adjust?"
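The default thresholds above can be sketched as a single sanity-check pass. This is a minimal illustration, not part of the skill itself; the function name `sanity_check` and the returned report structure are assumptions made for the example:

```python
import numpy as np
import pandas as pd


def sanity_check(df: pd.DataFrame,
                 missing_flag: float = 0.05,   # flag if >5% missing
                 missing_warn: float = 0.30,   # warn if >30% missing
                 dup_flag: float = 0.01,       # flag if >1% duplicate rows
                 iqr_k: float = 1.5) -> dict:  # IQR multiplier for outliers
    """Apply the default quality thresholds and return a small report."""
    report = {}

    # Missing values per column, as a fraction of rows
    miss = df.isna().mean()
    report["missing_flagged"] = miss[miss > missing_flag].to_dict()
    report["missing_warned"] = miss[miss > missing_warn].to_dict()

    # Exact duplicate rows, as a fraction of all rows
    report["duplicates_flagged"] = bool(df.duplicated().mean() > dup_flag)

    # Outliers via the IQR rule (outside [Q1 - k*IQR, Q3 + k*IQR])
    outliers = {}
    for col in df.select_dtypes(include=np.number):
        q1, q3 = df[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        mask = (df[col] < q1 - iqr_k * iqr) | (df[col] > q3 + iqr_k * iqr)
        outliers[col] = int(mask.sum())
    report["outlier_counts"] = outliers

    return report
```

All three thresholds are parameters, so adjusted values from the user can be passed in directly instead of the defaults.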

Workflow

1. Data Loading & Overview
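For the CSV case, this first step might look like the following sketch; the function name `load_and_overview` and the plain-`print` reporting are illustrative choices, not part of the skill:

```python
import pandas as pd


def load_and_overview(path: str) -> pd.DataFrame:
    """Load a CSV and print a first-pass structural overview."""
    df = pd.read_csv(path)

    # Basic shape and column types
    print(f"Shape: {df.shape[0]} rows x {df.shape[1]} columns")
    print("\nDtypes:")
    print(df.dtypes)

    # A peek at the data and summary statistics for all columns
    print("\nFirst rows:")
    print(df.head())
    print("\nSummary statistics:")
    print(df.describe(include="all").T)

    return df
```

The `include="all"` argument to `describe` covers non-numeric columns as well, so categorical fields show counts and top values alongside the numeric summaries.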