# advanced-tool-usage

Guidelines for multi-stage tool orchestration and handling large data using `redirect_tool_call`. Use this when you need to process large amounts of data without exhausting the context window or when building complex data pipelines.
## When & Why to Use This Skill
This skill covers multi-stage tool orchestration and large-scale data handling built around `redirect_tool_call`. It lets an agent build complex data pipelines and process large datasets efficiently by offloading raw output to external storage, preventing context window exhaustion and keeping the workspace organized.
## Use Cases
- Complex Data Pipelining: Chaining multiple tools where the output of one tool (e.g., a web search) serves as the direct input for another (e.g., a Python script) via file-based redirection.
- Large-Scale Log Analysis: Processing massive system logs or datasets by redirecting output to a file and using search utilities like `grep` or `rg` to extract only relevant insights into the conversation.
- Efficient Workspace Management: Creating dedicated scratch directories for multi-stage tasks to keep the environment clean and organized during intermediate processing steps.
- High-Volume Data Exporting: Generating and saving large JSON, CSV, or text reports (e.g., >5MB) directly to the workspace for user download instead of flooding the chat interface with raw text.
- Context Window Optimization: Reducing token consumption by only reading refined subsets of data into the active chat history after external processing.
## Core Principles
- Context Economy: Never bring raw, voluminous data into the conversation if you only need a refined subset.
- Pipeline Thinking: View tools as modular blocks that can pass data through files.
- Offloading: Use `redirect_tool_call` to "capture" output into external storage.
## Patterns
### 1. The Pipelining Pattern
When a tool's output is the input for another tool:
- Redirect: Call the first tool using `redirect_tool_call`.
- Process: Call the second tool (e.g., `python_execute` or `shell_execute`) and pass the file path created in step 1 as an argument.
- Refine: Read only the final processed result into the conversation.
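A minimal local sketch of this pattern, in plain Python so it runs standalone. The stub `fetch_raw_results` stands in for a redirected tool call (e.g., a search captured with `redirect_tool_call`), and `process` stands in for the second tool receiving the file path:

```python
import json
import tempfile
from pathlib import Path

def fetch_raw_results(path: Path) -> None:
    """Stand-in for a redirected search call: dumps raw output to a file."""
    records = [{"id": i, "score": i * 0.1} for i in range(1000)]
    path.write_text(json.dumps(records))

def process(path: Path) -> Path:
    """Stand-in for python_execute: consumes the file path from step 1."""
    records = json.loads(path.read_text())
    top = sorted(records, key=lambda r: r["score"], reverse=True)[:5]
    out = path.with_suffix(".top5.json")
    out.write_text(json.dumps(top, indent=2))
    return out

raw = Path(tempfile.gettempdir()) / "step1.json"
fetch_raw_results(raw)      # Redirect: raw output never enters the chat
refined = process(raw)      # Process: second stage reads the file path
print(refined.read_text())  # Refine: only the small result is surfaced
```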
### 2. The Context Buffer Pattern
When working with large files or long logs:
- Redirect the reading tool (e.g., `cat`, `tavily_search`) to a temporary file.
- Use `rg` or `grep` to extract only the relevant lines from that file.
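A sketch of the same flow in plain Python, assuming a POSIX system with `grep` on the PATH. The synthetic log stands in for output that `redirect_tool_call` would have captured, and the path is illustrative:

```python
import subprocess
from pathlib import Path

log = Path("/tmp/service.log")  # assumed redirect target (illustrative)
log.write_text("\n".join(
    f"ERROR request {i} timed out" if i % 50 == 0 else f"INFO request {i} ok"
    for i in range(10_000)
))

# Extract only the relevant lines; this is the only text that should ever
# be read back into the conversation. check=False because grep exits 1
# when nothing matches.
errors = subprocess.run(
    ["grep", "-n", "ERROR", str(log)],
    capture_output=True, text=True, check=False,
).stdout

print(errors.splitlines()[:5])  # surface a small, refined subset
```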
### 3. Workspace Management for Pipelines
When building multi-stage pipelines that generate multiple files:
- Use `shell_execute` with `mktemp -d` to create a dedicated scratch directory.
- Direct all intermediate `redirect_tool_call` outputs into that directory to keep the workspace clean.
- Example: `redirect_tool_call(..., output_file="/tmp/tmp.X/step1.json")`
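A runnable sketch using `tempfile.mkdtemp()` as the Python equivalent of `mktemp -d`; the stage filenames are hypothetical names for redirected outputs:

```python
import json
import tempfile
from pathlib import Path

# One dedicated scratch directory per pipeline run.
scratch = Path(tempfile.mkdtemp(prefix="pipeline."))

stage1 = scratch / "step1.json"  # e.g., redirected search output
stage2 = scratch / "step2.json"  # e.g., filtered records

stage1.write_text(json.dumps([{"id": i} for i in range(100)]))
records = json.loads(stage1.read_text())
stage2.write_text(json.dumps([r for r in records if r["id"] % 2 == 0]))

# All intermediates live under one directory, so inspection and cleanup
# each touch a single path instead of files scattered across the workspace.
print(f"intermediates in {scratch}: {[p.name for p in scratch.iterdir()]}")
```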
### 4. The Large Data Export
When the user requests a result that is too large for markdown (e.g., a 5MB JSON dump):
- Use `redirect_tool_call` with a specific `output_file` name.
- Inform the user of the file location instead of printing the content.
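A sketch of the export flow; the report path and contents are illustrative, and in an agent run the file would be produced by `redirect_tool_call` with an explicit `output_file`:

```python
import json
from pathlib import Path

report_path = Path("/tmp/full_report.json")  # assumed workspace path
rows = [{"row": i, "value": f"payload-{i}"} for i in range(200_000)]
report_path.write_text(json.dumps(rows))     # multi-MB dump stays on disk

size_mb = report_path.stat().st_size / 1e6
# Only this one-line summary enters the conversation, not the report body.
print(f"Report saved to {report_path} ({size_mb:.1f} MB, {len(rows)} rows)")
```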
## When to use `redirect_tool_call`
- The expected output is > 50 lines and the tool does NOT support its own redirection (e.g., searches, API calls).
- The output is raw data (JSON, CSV) that needs further processing by another tool.
- You are chaining an MCP tool into a local processing tool.
Note: For `shell_execute` or `python_execute`, always use internal file writing (`>` or `file.write()`) instead of `redirect_tool_call` for maximum efficiency.
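For example, a script run through `python_execute` can write its own output file directly, which is what the note above recommends (path illustrative):

```python
import json

# Internal file writing: the executed code saves its own output, so no
# redirect_tool_call wrapper is needed around the tool call itself.
results = {"status": "ok", "items": list(range(10_000))}
with open("/tmp/results.json", "w") as f:
    json.dump(results, f)

print("wrote /tmp/results.json")  # keep stdout tiny; the data stays on disk
```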