nixtla-universal-validator
Validate Nixtla skills and plugins with deterministic evidence bundles and strict schema gates. Use when auditing changes or enforcing compliance. Trigger with 'run validation' or 'audit validators'.
When & Why to Use This Skill
The Nixtla Universal Validator ensures the integrity, compliance, and functional correctness of AI agent skills and plugins. It combines a multi-phase subagent workflow with a deterministic runner to generate comprehensive evidence bundles (JSON summaries, markdown reports, and execution logs). The tool acts as a strict quality gate, automating schema verification and behavioral testing to maintain high standards in agentic ecosystems.
Use Cases
- CI/CD Integration for Agent Skills: Automatically trigger validation suites during pull requests to ensure new skills or plugins adhere to strict schema gates and structural requirements before deployment.
- Compliance and Security Auditing: Generate deterministic evidence bundles (reports and logs) to provide a transparent audit trail for enterprise-level compliance and regulatory reviews.
- Regression Testing for Plugin Updates: Run behavioral checks and 'ground-truth' log analysis to verify that updates to existing plugins do not introduce functional regressions or break downstream dependencies.
- Automated Quality Assurance Reports: Utilize the multi-phase subagent workflow to reconcile complex validation data into human-readable summaries, providing developers with clear pass/fail results and actionable next steps.
| name | nixtla-universal-validator |
|---|---|
| description | "Validate Nixtla skills and plugins with deterministic evidence bundles and strict schema gates. Use when auditing changes or enforcing compliance. Trigger with 'run validation' or 'audit validators'." |
| allowed-tools | "Read,Write,Bash(python:*),Bash(bash:*),Bash(pytest:*)" |
| version | "1.0.0" |
| author | "Jeremy Longshore <jeremy@intentsolutions.io>" |
| license | MIT |
Nixtla Universal Validator
Purpose
Produce deterministic, reviewable validation evidence (reports + JSON + logs) for a repo, plugin, or skill.
Overview
This skill combines two layers:
- A multi-phase subagent workflow (for human-readable analysis + reconciliation)
- A deterministic validator runner (for ground-truth logs and machine-readable summaries)
Validation runs as a pipeline with deterministic gates:
- Discover what changed and what should be validated
- Validate schemas/structure (skills + plugins) using canonical repo validators
- Run behavioral checks (tests) when requested
- Reconcile results into a single evidence bundle with pass/fail and next actions
This pattern generalizes beyond Nixtla by swapping the check catalog (a list of commands + expected artifacts).
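To make the check-catalog idea concrete, here is a hypothetical sketch of what such a catalog could look like; the Check class and its field names are illustrative, not the runner's actual schema.

```python
# Hypothetical check catalog: each entry pairs a command with the artifacts it is
# expected to produce. Names and fields are illustrative, not the runner's schema.
from dataclasses import dataclass, field


@dataclass
class Check:
    name: str
    command: list[str]                                            # command to execute
    expected_artifacts: list[str] = field(default_factory=list)  # files the check must leave behind


CATALOG = [
    Check(
        name="skills-schema",
        command=["python", "004-scripts/validate_skills_v2.py", "--fail-on-warn"],
        expected_artifacts=["checks/skills-schema.log"],
    ),
    Check(
        name="plugins-structure",
        command=["bash", "004-scripts/validate-all-plugins.sh", "."],
        expected_artifacts=["checks/plugins-structure.log"],
    ),
]
```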
Prerequisites
- Python 3.11+
- Repo validators available:
  - `004-scripts/validate_skills_v2.py`
  - `004-scripts/validate-all-plugins.sh`
- Optional for plugin validation: `jq`
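A quick preflight sketch, run from the repo root, that checks these prerequisites before starting a suite (the checks themselves are illustrative, not part of the runner):

```python
# Preflight sketch: confirm the interpreter version, the canonical validators,
# and the optional jq dependency. Run from the repo root.
import shutil
import sys
from pathlib import Path

assert sys.version_info >= (3, 11), "Python 3.11+ is required"

for script in ("004-scripts/validate_skills_v2.py", "004-scripts/validate-all-plugins.sh"):
    assert Path(script).exists(), f"missing repo validator: {script}"

if shutil.which("jq") is None:
    print("jq not found: plugin validation will be unavailable")
```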
Instructions
Step 1: Create a run directory
Use the built-in runner to create a timestamped evidence bundle under reports/<project>/<timestamp>/.
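The runner creates this directory itself; the sketch below only illustrates the naming convention, and the exact timestamp format is an assumption:

```python
# Illustration of the evidence-bundle layout. The runner performs this step for you;
# the UTC timestamp format shown here is an assumption, not the runner's exact format.
from datetime import datetime, timezone
from pathlib import Path

project = "nixtla"
timestamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
run_dir = Path("reports") / project / timestamp
(run_dir / "checks").mkdir(parents=True, exist_ok=True)
print(run_dir)  # e.g. reports/nixtla/20250101-120000
```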
Step 2: Pick a target scope
Choose one:
- Repo root: validate everything
- A plugin folder: `005-plugins/<plugin>`
- A skill folder: `.claude/skills/<skill>` or `003-skills/.claude/skills/<skill>`
Step 3: Run the deterministic validator suite
python {baseDir}/scripts/run_validator_suite.py \
--target . \
--project nixtla \
--out reports/nixtla
List built-in profiles:
python {baseDir}/scripts/run_validator_suite.py \
--list-profiles \
--target . \
--project nixtla \
--out reports/nixtla
To validate a single plugin:
python {baseDir}/scripts/run_validator_suite.py \
--target 005-plugins/nixtla-baseline-lab \
--project nixtla-baseline-lab \
--out reports/nixtla-baseline-lab
Step 4: (Optional) Include tests
python {baseDir}/scripts/run_validator_suite.py \
--target . \
--project nixtla \
--out reports/nixtla \
--run-tests
Step 4b: (Optional) Run an enterprise profile
python {baseDir}/scripts/run_validator_suite.py \
--target . \
--project nixtla \
--out reports/nixtla \
--profile enterprise \
--fail-on-warn \
--run-tests
Step 5: (Optional) Use the multi-phase subagent workflow
Run phases in order using the prompts in {baseDir}/agents/ and procedures in {baseDir}/references/.
Each phase must write a report file under the run directory and return strict JSON per the phase contract.
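The exact phase contracts live in {baseDir}/references/; the sketch below only shows the general pattern of enforcing "strict JSON plus a report file under the run directory", with hypothetical field names.

```python
# Sketch of checking one phase's output: the phase must return strict JSON and must
# have written its report under the run directory. The "report_path" key is a
# hypothetical field name; see {baseDir}/references/ for the real contracts.
import json
from pathlib import Path


def check_phase_output(run_dir: Path, raw_output: str) -> dict:
    payload = json.loads(raw_output)           # anything other than strict JSON raises
    report = run_dir / payload["report_path"]  # hypothetical field name
    if not report.is_file():
        raise FileNotFoundError(f"phase did not write its report: {report}")
    return payload
```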
Output
Each run creates a timestamped evidence bundle:
- `reports/<project>/<timestamp>/summary.json`
- `reports/<project>/<timestamp>/report.md`
- `reports/<project>/<timestamp>/checks/*.log`
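To consume a bundle programmatically, a minimal sketch that loads the newest summary.json for a project; the top-level "status" key is an assumption about the summary's schema:

```python
# Sketch: find the newest evidence bundle for a project and inspect its summary.
# The "status" key is an assumed field; check a generated summary.json for the real shape.
import json
from pathlib import Path

runs = sorted(p for p in Path("reports/nixtla").iterdir() if p.is_dir())
if runs:
    latest = runs[-1]
    summary = json.loads((latest / "summary.json").read_text())
    print(latest.name, summary.get("status", "unknown"))
```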
Error Handling
Error: Validator command not found
Solution: Confirm the repo scripts exist and run from the repo root.

Error: Plugin validation fails due to a missing `jq`
Solution: Install `jq` or run only skill validation.

Error: Tests fail after schema passes
Solution: Treat this as a behavioral regression; fix the tests or code, then re-run.
Examples
Common validations:
# Strict schema/structure gates
python 004-scripts/validate_skills_v2.py --fail-on-warn
bash 004-scripts/validate-all-plugins.sh .
Generate an evidence bundle (profile-driven):
# Generate a single evidence bundle for a PR
python {baseDir}/scripts/run_validator_suite.py \
--target . \
--project pr-1234 \
--out reports/pr-1234 \
--run-tests
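In CI, the same invocation can be wrapped so the job fails when the suite fails; a minimal sketch, assuming the runner exits non-zero when any gate fails:

```python
# CI gate sketch: run the validator suite and propagate its exit code to the job.
# Assumes the runner exits non-zero on failure; paths and project name are illustrative.
import subprocess
import sys

result = subprocess.run(
    [
        sys.executable, "scripts/run_validator_suite.py",  # resolve against the skill's {baseDir}
        "--target", ".",
        "--project", "pr-1234",
        "--out", "reports/pr-1234",
        "--run-tests",
    ],
    check=False,
)
sys.exit(result.returncode)  # non-zero fails the CI job
```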
Resources
- Subagent orchestration pattern: `000-docs/000a-planned-skills/templates/verification-pipeline/README.md`
- Canonical skills validator: `004-scripts/validate_skills_v2.py`
- Canonical plugin validator: `004-scripts/validate-all-plugins.sh`
- Subagent prompts: `{baseDir}/agents/`
- Phase procedures: `{baseDir}/references/`