prompt-caching
Provider-native prompt caching for Claude and OpenAI. Use when reducing LLM token costs with cache breakpoints, cached system prompts, or repeated prompt prefixes.
Detailed documentation for this skill is not yet available; see the GitHub repository for more information. The sketch below illustrates the underlying provider mechanisms.
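A minimal sketch of the two caching styles this skill covers, assuming the official `anthropic` and `openai` Python SDKs. The model ids and the placeholder system prompt are illustrative assumptions, not part of the skill: Claude uses explicit `cache_control` breakpoints, while OpenAI caches long prompt prefixes automatically.

```python
# Minimal sketch, assuming the official `anthropic` and `openai` SDKs.
# Model ids and LONG_SYSTEM_PROMPT are placeholder assumptions.
import anthropic
import openai

# Pretend this is a long, stable instruction block reused across requests.
LONG_SYSTEM_PROMPT = "You are a support agent for ExampleCo. ... " * 200

# --- Claude: explicit cache breakpoints ---
# Anthropic caches everything up to a content block marked with
# `cache_control`; later calls that share this prefix read from the cache.
claude = anthropic.Anthropic()
response = claude.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            # Breakpoint: cache the prompt prefix ending at this block.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize my last ticket."}],
)
# cache_creation_input_tokens / cache_read_input_tokens report cache activity.
print(response.usage)

# --- OpenAI: automatic prefix caching ---
# OpenAI caches sufficiently long prompt prefixes automatically; no markup
# is needed, but the stable content must come first in the message list.
client = openai.OpenAI()
completion = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model id
    messages=[
        {"role": "system", "content": LONG_SYSTEM_PROMPT},  # stable prefix first
        {"role": "user", "content": "Summarize my last ticket."},
    ],
)
# cached_tokens shows how much of the prompt was served from the cache.
print(completion.usage.prompt_tokens_details.cached_tokens)
```

The design difference matters for prompt layout: with Claude you choose where the cached prefix ends, so place breakpoints after the largest stable chunk; with OpenAI you only control ordering, so keep volatile content (user turns, timestamps) at the end of the prompt.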