---
name: agents-consult
description: Consult multiple LLM providers via src.agents.scripts.consult to get approach suggestions before implementing changes. Use when you need design options, risk checks, or alternative solutions for a task in this repo, especially before complex work.
---
## When & Why to Use This Skill
The Agents Consult skill aggregates design suggestions and risk assessments from multiple LLM providers before you implement a change. By putting the same prompt to several models, it helps you validate a technical approach, surface potential pitfalls, and explore alternative architectural patterns before committing to code, catching implementation errors while they are still cheap to fix.
## Use Cases
- Complex Feature Planning: Use the skill to gather diverse architectural perspectives and design options before starting work on intricate system components.
- Risk Mitigation: Perform automated risk checks by asking multiple models to identify potential edge cases or security vulnerabilities in a proposed implementation strategy.
- Refactoring Strategy: Consult various AI providers to determine the most efficient and maintainable way to restructure legacy code or improve existing datasets.
- Consensus-Based Decision Making: Compare suggestions from different models (like Claude and Codex) to find the most robust solution for a specific technical challenge.
# Agents Consult
## Overview
Use this skill to ask multiple LLM providers for implementation approaches before you start coding.
## Quick start
Run the consult script with a clear prompt.
```shell
uv run python -m src.agents.scripts.consult 'task.prompt=Propose improvements for BLCS generate_dataset.'
```
## Common options
Use the approach-oriented system prompt when you want structured design guidance.
```shell
uv run python -m src.agents.scripts.consult system_prompt=approach 'task.prompt=...'
```
Disable a provider to exclude its models from the consultation.
```shell
uv run python -m src.agents.scripts.consult claude.enable=false 'task.prompt=...'
```
## Output
Expect a multi-provider summary with sections like:
```
Sub-agent Consultation Results

[CLAUDE] ...
[CODEX] ...
```
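If you need to post-process the summary, the per-provider sections can be split on their bracketed tags. This is a minimal sketch assuming the output format shown above (one `[PROVIDER]` tag starting each section); `split_provider_sections` is an invented name, not an API of this repo.

```python
import re


def split_provider_sections(summary: str) -> dict[str, str]:
    """Split a consultation summary into per-provider chunks.

    Assumes each provider's section begins with a bracketed uppercase
    tag such as [CLAUDE] or [CODEX], as in the sample output above.
    """
    sections: dict[str, str] = {}
    current = None
    for line in summary.splitlines():
        m = re.match(r"\[([A-Z]+)\]\s*(.*)", line)
        if m:
            # New provider section: start collecting under its tag.
            current = m.group(1)
            sections[current] = m.group(2)
        elif current is not None:
            # Continuation line of the current provider's section.
            sections[current] += "\n" + line
    return sections


sample = """Sub-agent Consultation Results
[CLAUDE] Prefer a strategy pattern here.
[CODEX] Consider caching the dataset."""
print(split_provider_sections(sample))
```

Lines before the first tag (such as the report title) are intentionally dropped.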