web-load2

from workromancer

Loads and extracts all content from web URLs. Use when the user asks to fetch, load, or retrieve content from a website or URL.


When & Why to Use This Skill

The Web Content Loader (web-load2) is a Claude skill that fetches, extracts, and processes text content from web URLs. Built around the WebFetch tool, it gives the agent access to live web content, including online articles, documentation, and data, rather than relying on static knowledge alone. It handles HTTP-to-HTTPS upgrading and redirects automatically, and it can save extracted content directly to local files, which makes it a practical utility for data-driven workflows.

Use Cases

  • In-depth Topic Research: Automatically load and extract text from multiple source URLs to provide comprehensive summaries or fact-checking against live web data.
  • Documentation Harvesting: Fetch technical documentation or API references from the web and save them as local files for persistent access and coding assistance.
  • Content Archiving: Convert web-based articles or blog posts into structured text or Markdown files for personal knowledge management and offline reading.
  • Batch URL Processing: Retrieve information from several websites in parallel to gather competitive intelligence or market data.
name: web-load2
description: Loads and extracts all content from web URLs. Use when the user asks to fetch, load, or retrieve content from a website or URL.

Web Content Loader

This skill helps you load and extract all content from web URLs efficiently.

Instructions

When loading content from a URL (an illustrative Python sketch of these steps follows the list):

  1. Use WebFetch to fetch the content from the provided URL

    • The URL must be fully formed and valid
    • Provide a clear prompt describing what information to extract
    • HTTP URLs will be automatically upgraded to HTTPS
  2. Extract all content by using an appropriate prompt like:

    • "Extract all text content from this page"
    • "Get the complete page content including all sections"
    • "Retrieve all information from this webpage"
  3. Handle redirects properly:

    • If WebFetch returns a redirect message, make a new request with the redirect URL
  4. Save content if requested:

    • If the user wants to save the content, use the Write tool to save to a file
    • Suggest appropriate filenames based on the URL or content
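
The fetching itself is done by the WebFetch tool, but the mechanics in steps 1 and 3 (validating the URL, upgrading HTTP to HTTPS, and following a redirect) can be sketched in plain Python. The `load_url` helper below is illustrative only; it is not part of the skill or of WebFetch.

```python
from urllib.parse import urlparse, urlunparse
from urllib.request import urlopen

def load_url(url: str, max_redirects: int = 5) -> str:
    """Illustrative stand-in for the WebFetch step: validate, upgrade, fetch."""
    parts = urlparse(url)
    if parts.scheme not in ("http", "https") or not parts.netloc:
        raise ValueError(f"URL must be fully formed and valid, got: {url!r}")

    # Mirror the skill's behaviour of upgrading plain HTTP to HTTPS.
    if parts.scheme == "http":
        url = urlunparse(parts._replace(scheme="https"))

    # urlopen already follows ordinary redirects on its own; the loop below
    # only makes step 3's "request the redirect URL again" idea explicit.
    for _ in range(max_redirects):
        with urlopen(url) as resp:
            if resp.status in (301, 302, 307, 308):
                url = resp.headers["Location"]
                continue
            charset = resp.headers.get_content_charset() or "utf-8"
            return resp.read().decode(charset, errors="replace")
    raise RuntimeError(f"Too many redirects for {url}")
```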

Examples

Basic URL fetch:

User: Load content from https://example.com
Action: Use WebFetch with prompt "Extract all text content from this page"
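
WebFetch performs the extraction itself based on the prompt you give it. As a rough standalone analogue, the standard-library sketch below shows one way to pull all visible text from a page; the `TextExtractor` class is illustrative and not something the skill defines.

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class TextExtractor(HTMLParser):
    """Collects visible text while skipping script and style blocks."""

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self._chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self._chunks.append(data.strip())

    def text(self) -> str:
        return "\n".join(self._chunks)

html = urlopen("https://example.com").read().decode("utf-8")
extractor = TextExtractor()
extractor.feed(html)
print(extractor.text())
```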

Fetch and save:

User: Load https://docs.example.com/guide and save it
Action: 
1. Use WebFetch to get content
2. Use Write to save to a markdown file
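
One way to suggest a filename from the URL, as step 4 recommends, is to slugify the host and path. The `suggest_filename` helper below is a hypothetical example, not part of the skill; in the agent the file would be written with the Write tool rather than `Path.write_text`.

```python
from pathlib import Path
from urllib.parse import urlparse

def suggest_filename(url: str, suffix: str = ".md") -> str:
    """Derive a readable filename from the URL, e.g. docs-example-com-guide.md."""
    parts = urlparse(url)
    pieces = [parts.netloc.replace(".", "-"), *parts.path.strip("/").split("/")]
    slug = "-".join(p for p in pieces if p)
    return (slug or "page") + suffix

# Placeholder content; in practice this would be the text returned by WebFetch.
content = "# Guide\n\nExtracted page content goes here.\n"
path = Path(suggest_filename("https://docs.example.com/guide"))
path.write_text(content, encoding="utf-8")
print(f"Saved to {path}")  # Saved to docs-example-com-guide.md
```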

Multiple URLs:

User: Load content from these URLs: url1, url2, url3
Action: Use WebFetch for each URL in parallel when possible
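
Inside the agent, parallelism comes from issuing several independent WebFetch calls in a single turn. As a plain-Python analogue (the `fetch` helper and the URL list are illustrative), independent URLs can be retrieved concurrently with a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

urls = [
    "https://example.com",
    "https://example.org",
    "https://example.net",
]

def fetch(url: str) -> tuple[str, str]:
    """Fetch one URL and return it with the decoded response body."""
    with urlopen(url) as resp:
        return url, resp.read().decode("utf-8", errors="replace")

# Independent URLs can be fetched concurrently instead of one at a time.
with ThreadPoolExecutor(max_workers=4) as pool:
    for url, body in pool.map(fetch, urls):
        print(f"{url}: {len(body)} characters")
```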

Best Practices

  • Always validate that URLs are well-formed before fetching
  • Use descriptive prompts when calling WebFetch to get complete content
  • For large amounts of content, consider saving to files
  • Process multiple URLs in parallel when they're independent