ralph

From MrTrigger

Set up tasks for autonomous feature development. Use when starting a new feature that will be built task-by-task. Triggers on: ralph, set up ralph, start ralph, new ralph feature, ralph setup, plan feature. Chats through the feature idea, creates tasks with dependencies in progress.txt.

Updated Jan 10, 2026

When & Why to Use This Skill

Ralph is an interactive feature planning and task orchestration skill designed for autonomous software development. It bridges the gap between high-level requirements and execution by breaking down complex features into granular, dependency-aware tasks. By enforcing a 'one-iteration' task size and maintaining a structured progress log, it ensures that AI agents can build features systematically without exceeding context limits or losing track of project state.

Use Cases

  • Converting PRDs to Roadmaps: Automatically parse Product Requirement Documents into a sequence of executable technical tasks with defined acceptance criteria.
  • Interactive Feature Scoping: Use a conversational interface to clarify requirements for new features, helping developers define scope, tech stacks, and implementation patterns before coding starts.
  • Dependency Management: Organize complex development workflows by mapping out which tasks (e.g., database migrations) must be completed before others (e.g., UI components) can begin.
  • Autonomous Workflow Handoff: Generate and maintain a progress.txt file that serves as a source of truth for sub-agents, allowing for seamless handoffs and state management in long-running builds.
  • Project Auditing and Resumption: Analyze existing task lists to identify blocked tasks, verify structural integrity, and determine the immediate next steps for a development loop.
name: ralph
description: "Set up tasks for autonomous feature development. Use when starting a new feature that will be built task-by-task. Triggers on: ralph, set up ralph, start ralph, new ralph feature, ralph setup, plan feature. Chats through the feature idea, creates tasks with dependencies in progress.txt."

Ralph Feature Setup

Interactive feature planning that creates task lists with dependencies for autonomous execution.


The Job

Three modes:

Mode 1: From PRD

  1. Read the PRD file (user stories become tasks)
  2. Convert user stories to executable tasks with dependencies
  3. Create progress.txt
  4. Hand off to build-feature skill for execution

Mode 2: New Feature (no PRD)

  1. Chat through the feature - Ask clarifying questions
  2. Break into small tasks - Each completable in one iteration
  3. Create progress.txt - With tasks and dependencies
  4. Hand off to build-feature skill for execution

Mode 3: Existing Tasks

  1. Load existing progress.txt
  2. Verify structure - Check tasks have proper dependencies
  3. Show status - Which tasks are ready, completed, blocked
  4. Hand off to build-feature skill for execution

Ask the user which mode they need:

Are you:
1. Using a PRD (give me the path, e.g., /tasks/prd-feature-name.md)
2. Starting fresh (I'll ask questions to plan the feature)
3. Continuing existing tasks (I'll load progress.txt)

Mode 1: From PRD

If the user provides a PRD path (or is coming from the prd skill):

Read and parse the PRD:

cat /tasks/prd-[feature-name].md

Convert user stories to tasks:

Each User Story in the PRD becomes one or more tasks:

  • US-001 → Task(s) for that story
  • The acceptance criteria become the task's Acceptance: field
  • Order tasks by dependency (schema → backend → UI → tests)

Example conversion:

PRD User Story:
### US-002: Display priority indicator on task cards
**Acceptance Criteria:**
- [ ] Each task card shows colored priority badge
- [ ] npm run typecheck passes
- [ ] Verify in browser

Becomes task:
- [ ] Display priority indicator on task cards
  Description: Add colored priority badge to task cards (red=high, yellow=medium, gray=low)
  Files: src/components/TaskCard.tsx
  Depends: Add priority field to database
  Acceptance: npm run typecheck passes, verify badge renders in browser
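The conversion above is mechanical enough to sketch in code. A minimal illustration (Python, for illustration only; `story_to_task` is a hypothetical helper, not part of the skill, and the regexes assume the exact PRD heading and checkbox format shown above):

```python
import re

def story_to_task(story_md: str, files: str = "", depends: str = "none") -> str:
    """Turn one '### US-xxx: Title' PRD section into a progress.txt task block."""
    title_match = re.search(r"^### US-\d+:\s*(.+)$", story_md, re.MULTILINE)
    title = title_match.group(1).strip() if title_match else "Untitled task"
    # Acceptance criteria lines look like '- [ ] criterion'
    criteria = re.findall(r"^- \[ \] (.+)$", story_md, re.MULTILINE)
    return (
        f"- [ ] {title}\n"
        f"  Description: {title}\n"
        f"  Files: {files}\n"
        f"  Depends: {depends}\n"
        f"  Acceptance: {', '.join(criteria)}"
    )

story = """### US-002: Display priority indicator on task cards
**Acceptance Criteria:**
- [ ] Each task card shows colored priority badge
- [ ] npm run typecheck passes
"""
print(story_to_task(story, files="src/components/TaskCard.tsx",
                    depends="Add priority field to database"))
```

Adapt the regexes if your PRD template uses different heading or checkbox conventions.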

Skip the clarifying questions

The PRD already has the answers. Go directly to creating progress.txt with the converted tasks.


Mode 2: New Feature (No PRD)

If no PRD exists, gather requirements through conversation.

Step 1: Understand the Feature

Start by asking the user about their feature. Don't assume - ASK:

What feature are you building?

Then ask clarifying questions:

  • What's the user-facing goal?
  • What parts of the codebase will this touch? (database, UI, API, etc.)
  • Are there any existing patterns to follow?
  • What should it look like when done?

Keep asking until you have enough detail to break it into tasks.


Step 2: Break Into Tasks

Each task must be completable in ONE iteration (~one context window).

Each iteration spawns a fresh subagent with no memory of previous work. If a task is too big, the agent runs out of context before finishing.

Right-sized tasks:

  • Add a database column + migration
  • Create a single UI component
  • Implement one server action
  • Add a filter to an existing list
  • Write tests for one module

Too big (split these):

  • "Build the entire dashboard" → Split into: schema, queries, components, filters
  • "Add authentication" → Split into: schema, middleware, login UI, session handling
  • "Refactor the API" → Split into one task per endpoint

Rule of thumb: If you can't describe the change in 2-3 sentences, it's too big.


Step 3: Order by Dependencies

Tasks execute based on Depends: field. Earlier tasks must complete before dependent ones start.

Typical order:

  1. Schema/database changes (migrations)
  2. Server actions / backend logic
  3. UI components that use the backend
  4. Integration / E2E tests

Express dependencies like this:

Task 1: Schema (Depends: none)
Task 2: Server action (Depends: Schema task)
Task 3: UI component (Depends: Server action task)
Task 4: Tests (Depends: UI component task)

Tasks that don't depend on each other can run in parallel, and may list the same upstream task in their Depends: field.
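The ordering above can be computed mechanically from the Depends: fields. A minimal sketch (Python, using the standard library's `graphlib`; the task names mirror the hypothetical example above):

```python
from graphlib import TopologicalSorter

# Map each task to the set of tasks it depends on, mirroring the
# Depends: fields in the example above.
deps = {
    "Schema": set(),
    "Server action": {"Schema"},
    "UI component": {"Server action"},
    "Tests": {"UI component"},
}

# static_order() yields a valid execution order and raises CycleError
# if the dependency graph loops.
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['Schema', 'Server action', 'UI component', 'Tests']
```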


Step 4: Create progress.txt

Create the file at scripts/build-feature-loop/progress.txt:

# Build Progress Log
Started: [today's date]
Feature: [feature name]

## Codebase Patterns
(Patterns discovered during this feature build)

---

## Tasks

- [ ] Task 1 title
  Description: Detailed description of what to do
  Files: path/to/file1.ts, path/to/file2.ts
  Depends: none
  Acceptance: npm run typecheck passes

- [ ] Task 2 title
  Description: What to implement
  Files: src/components/Thing.tsx
  Depends: Task 1 title
  Acceptance: npm run typecheck passes, component renders

- [ ] Task 3 title
  Description: What to do
  Files: src/api/endpoint.ts
  Depends: Task 1 title
  Acceptance: npm run typecheck passes, npm test passes

- [ ] Task 4 title
  Description: What to implement
  Files: src/tests/feature.test.ts
  Depends: Task 2 title, Task 3 title
  Acceptance: npm test passes

---

## Completed Work
(Entries added as tasks complete)

Task format:

- [ ] Short descriptive title
  Description: Detailed description with what to implement
  Files: comma-separated list of files to create or modify
  Depends: none | Task title | Task A, Task B (comma-separated)
  Acceptance: Specific verifiable criteria

Dependency rules:

  • Depends: none - Can start immediately
  • Depends: Task A - Blocked until Task A is [x]
  • Depends: Task A, Task B - Blocked until ALL listed tasks are [x]
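These rules can be applied mechanically when reading a progress.txt back. A hedged sketch (Python for illustration; the skill itself is a prompt, not code, and the parsing assumes the exact `- [ ] title` / `Depends:` format shown above):

```python
def classify_tasks(text: str):
    """Return (completed, ready, blocked) lists of task titles."""
    tasks = []                    # (title, done, depends-on list)
    title, done = None, False
    for line in text.splitlines():
        if line.startswith("- ["):
            done = line.startswith("- [x]")
            title = line[6:].strip()
        elif line.strip().startswith("Depends:") and title:
            deps = line.split(":", 1)[1].strip()
            depends = [] if deps == "none" else [d.strip() for d in deps.split(",")]
            tasks.append((title, done, depends))
            title = None
    completed = {t for t, d, _ in tasks if d}
    # Ready: unchecked, and every listed dependency is already [x]
    ready = [t for t, d, deps in tasks if not d and all(x in completed for x in deps)]
    blocked = [t for t, d, _ in tasks if not d and t not in ready]
    return sorted(completed), ready, blocked

sample = """\
- [x] Schema
  Depends: none
- [ ] Server action
  Depends: Schema
- [ ] Tests
  Depends: Server action
"""
done, ready, blocked = classify_tasks(sample)
print(done, ready, blocked)
```

This is the same ready/blocked classification Mode 3 performs when showing status.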

Step 5: Archive Previous Progress (if needed)

Before creating new progress.txt, check if one exists:

cat scripts/build-feature-loop/progress.txt

Archive if:

  • It has completed tasks from a previous feature
  • The previous feature is different from the current one

Archive command:

DATE=$(date +%Y-%m-%d)
FEATURE="previous-feature-name"
mkdir -p "scripts/build-feature-loop/archive/$DATE-$FEATURE"
mv scripts/build-feature-loop/progress.txt "scripts/build-feature-loop/archive/$DATE-$FEATURE/"

Preserve useful Codebase Patterns - Copy any relevant patterns to the new progress.txt.


Step 6: Confirm Setup & Sync TodoWrite

After creating progress.txt:

  1. Update TodoWrite to show the user the task list
  2. Show summary:
Feature ready for autonomous execution!

**Feature:** [name]

**Tasks:**
1. [ ] Task 1 - no dependencies (READY)
2. [ ] Task 2 - depends on #1
3. [ ] Task 3 - depends on #1
4. [ ] Task 4 - depends on #2, #3
...

**Ready to start:** [count] tasks
**Blocked:** [count] tasks (waiting on dependencies)

**To run:** Invoke the build-feature skill to start the execution loop.

Mode 3: Continuing Existing Tasks

If the user has an existing progress.txt:

Load and parse:

cat scripts/build-feature-loop/progress.txt

Show status:

Count tasks by status:

  • [ ] with Depends: none or all dependencies [x] → Ready
  • [ ] with unmet dependencies → Blocked
  • [x] → Completed

**Current Status:**
- Completed: 3 tasks
- Ready to work: 2 tasks
- Blocked: 5 tasks (waiting on dependencies)

**Next task to execute:**
[Task title] - [brief description]

**To continue:** Invoke the build-feature skill.

Fix issues if needed:

If tasks are missing dependencies or have circular dependencies, offer to fix:

These tasks don't have dependencies set. Should I:
1. Add dependencies based on logical order?
2. Leave them parallel (any ready task can be picked)?
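Circular dependencies can be caught the same way execution order is computed. An illustrative sketch (Python; "A" and "B" are placeholder task titles, and `find_cycle` is a hypothetical helper):

```python
from graphlib import TopologicalSorter, CycleError

def find_cycle(deps):
    """Return the offending cycle as a list of titles, or None if acyclic."""
    try:
        # static_order() raises CycleError when the graph loops back on itself
        list(TopologicalSorter(deps).static_order())
        return None
    except CycleError as exc:
        return exc.args[1]

print(find_cycle({"A": ["B"], "B": ["A"]}))  # a cycle through A and B
print(find_cycle({"A": [], "B": ["A"]}))     # None
```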

Task Description Guidelines

Write descriptions that a future iteration can pick up without context:

- [ ] Create expense upsert tool skeleton
  Description: Create upsert-expense.ts with Zod schema matching upsert-income.ts pattern. Include action enum (create/update/delete/list), expense fields (description, amount, category, date).
  Files: workflows/tools/upsert-expense.ts
  Depends: none
  Acceptance: npm run typecheck passes, file exports tool definition

Good acceptance criteria (verifiable):

  • "Add investorType column to investor table with default 'cold'"
  • "Filter dropdown has options: All, Cold, Friend"
  • "npm run typecheck passes"
  • "npm test -- path/to/test.ts passes"

Bad acceptance criteria (vague):

  • "Works correctly"
  • "Good UX"
  • "Handles edge cases"

Always include:

npm run typecheck passes

For tasks with testable logic:

npm test passes

For UI/browser tasks:

  • Functional testing (checking behavior): use browser_snapshot to verify page content
  • Visual testing (checking appearance): use browser_take_screenshot to verify visual appearance

Example Conversation

User: set up ralph for adding expense tracking to the legal agent

Assistant: What feature are you building?

User: I want Grace (our legal chat agent) to be able to create, update, and delete expenses when users mention them in conversation.

Assistant: Got it - an expense upsert tool for the legal agent. A few questions:

  1. Are there existing tools in the legal agent I should follow as a pattern?
  2. Do expenses have different types (family vs child)?
  3. Should it also list expenses or just CRUD?
  4. Any task completion logic needed?

User: Follow upsert-income.ts. Yes family and child expenses (different tables). Yes list too. Mark APP_EXPENSES_ENTERED task complete.

Assistant: Perfect. Here's my proposed breakdown:

  1. Create tool skeleton - Zod schema, basic structure (no deps)
  2. Category mapping - Map names to category IDs (depends: #1)
  3. Amount parsing - Handle currency formats (depends: #1)
  4. Duplicate detection - Check before creating (depends: #1)
  5. Create family expenses - Insert into expense table (depends: #2, #3, #4)
  6. Create child expenses - Insert into child_expense (depends: #5)
  7. Update action - Modify existing expenses (depends: #6)
  8. Delete action - Remove expenses (depends: #6)
  9. List action - Show all expenses with totals (depends: #6)
  10. Task completion - Mark APP_EXPENSES_ENTERED done (depends: #9)
  11. Register tool - Add to legal chat workflow (depends: #10)
  12. Unit tests - Test helper functions (depends: #11)
  13. E2E tests - Test via chat interface (depends: #12)

Does this look right? Any tasks to add or remove?

User: Looks good, create the tasks

Assistant: [Creates progress.txt with all tasks, updates TodoWrite]

Feature ready for autonomous execution!

**Feature:** Legal Agent Expense Upsert Tool

**Tasks:** 13 tasks created
**Ready to start:** 1 task (Create tool skeleton)
**Blocked:** 12 tasks (waiting on dependencies)

**To run:** Invoke the build-feature skill to start the loop.


Checklist Before Creating Tasks

  • Chatted through feature to understand scope
  • Each task completable in one iteration (small enough)
  • Tasks ordered by dependency (schema → backend → UI → tests)
  • Every task has "npm run typecheck passes" in acceptance criteria
  • UI tasks have browser verification in acceptance criteria
  • Descriptions have enough detail to implement without context
  • Previous progress.txt archived if it had content
  • New progress.txt created with all tasks
  • TodoWrite synced with task list