# ♻️ Refactor all prompts for LLM-to-LLM communication best practices #4
## Conversation
Apply prompt-engineering.mdc principles across all commands, agents, skills, and workflow rules.

Changes:
- Replace numbered steps with semantic XML tags
- Remove excessive markdown formatting (bold, italics, decorative symbols)
- Convert prescriptive instructions to goal-focused outcomes
- Trust the executing model's intelligence; remove micro-management
- Front-load critical information in all prompts
- Consolidate repetitive content while preserving functionality
- Remove anti-patterns and "bad example" code blocks

Impact:
- 18 files refactored
- ~1,476 lines removed (a net reduction of ~40%)
- Improved clarity and LLM parseability
- Reduced token usage while maintaining all functionality

Files modified:
- Commands: autotask, setup-environment, create-prompt, troubleshoot, handoff-context, personality-change, ai-coding-config, product-intel, generate-AGENTS-file (9)
- Agents: design-reviewer, seo-specialist, site-keeper (3)
- Skills: brainstorming, research, skill-creator, systematic-debugging, youtube-transcript-analyzer (5)
- Rules: git-worktree-task.mdc (1)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
## Pull Request Review: LLM-to-LLM Communication Refactoring

Thank you for this comprehensive refactoring! This is excellent work that systematically applies prompt engineering best practices across the entire codebase. The transformation from verbose, prescriptive instructions to goal-focused, XML-structured prompts demonstrates a deep understanding of how LLMs parse and execute instructions.

### Overall Assessment

Verdict: Approved with minor suggestions

This PR delivers exactly what it promises: cleaner, more parseable prompts that trust the executing model's intelligence while reducing token usage by ~40%. The refactoring maintains all functionality while making prompts more effective for LLM-to-LLM communication.

### Strengths

#### 1. Consistent Application of Principles ✨

The refactoring consistently applies all key principles from prompt-engineering.mdc.
#### 2. Semantic XML Tags Over Numbers

The transition from numbered sections to semantic tags is particularly well-executed.

Before (from site-keeper.md):

```markdown
## Step 0: Discovery & Inventory (ALWAYS FIRST)
## What We Monitor (After Discovery)
## Communication Channels
```

After:

```xml
<objective>...</objective>
<service-detection>...</service-detection>
<error-intelligence>...</error-intelligence>
<communication>...</communication>
```

This makes prompts more maintainable: you can reorder sections without renumbering everything, and the tags clearly communicate intent.

#### 3. Significant Token Reduction

The 40% reduction in total lines (892 additions, 2,369 deletions) is impressive.
These reductions come from removing redundancy and verbose explanations, not from cutting functionality; every prompt retains its original purpose.

#### 4. Clear Structure Aids Comprehension

The XML structure creates clear boundaries that help LLMs parse complex prompts. For example, in autotask.md:

```xml
<task-preparation>...</task-preparation>
<worktree-setup>...</worktree-setup>
<autonomous-execution>...</autonomous-execution>
<obstacle-and-decision-handling>...</obstacle-and-decision-handling>
<validation-and-review>...</validation-and-review>
<create-pr>...</create-pr>
<bot-feedback-loop>...</bot-feedback-loop>
```

This makes it obvious which phase the LLM is in and what comes next.

#### 5. Trust the Executing Model

The refactoring consistently applies the principle "trust the executing model's superior capabilities." For instance, prompts that previously spelled out detailed step-by-step instructions for every scenario now simply state the goal. This is the right approach: the model executing these prompts (Claude 3.5 Sonnet or newer) is sophisticated enough to figure out implementation details.

### Areas for Improvement

#### 1. Inconsistent XML Tag Naming (Minor)

While the refactoring uses semantic tags, there is some inconsistency in naming conventions.
Suggestion: Establish a consistent naming convention and apply it across all files; a sketch follows.
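As one hedged illustration (the tag names below are hypothetical, not drawn from the refactored files), a single case style such as lowercase kebab-case could be applied everywhere:

```xml
<!-- A minimal sketch of one consistent convention: lowercase kebab-case throughout. -->
<objective>...</objective>
<success-criteria>...</success-criteria>   <!-- rather than <successCriteria> or <success_criteria> -->
<error-handling>...</error-handling>       <!-- rather than <ErrorHandling> -->
```

Which convention is chosen matters less than applying it uniformly, since mixed styles make tag names harder to predict when one prompt references another.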
#### 2. Some Prompts Still Have Prescriptive Language (Minor)

A few prompts retain step-by-step language that could be more goal-focused.

Example from autotask.md:

```xml
<task-preparation>
Ensure task clarity before implementation. If the task description is unclear or ambiguous, use /create-prompt to ask clarifying questions and create a structured prompt. If the task is clear and unambiguous, proceed directly to implementation.
</task-preparation>
```

This is an if-then-else prescription. A more goal-focused version:

```xml
<task-preparation>
Ensure complete understanding of requirements before implementation. Clarify ambiguous specifications to prevent wasted work.
</task-preparation>
```

The executing model knows how to clarify requirements; it doesn't need the conditional logic spelled out.

#### 3. Documentation of Breaking Changes (Missing)

The PR description states that this refactoring preserves all functionality, but doesn't explicitly confirm compatibility for existing consumers of these prompts.
Suggestion: Add a "Compatibility" section to the PR description confirming that existing workflows are unaffected, along the lines of the sketch below.
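A minimal sketch of such a section (the bullet items are assumptions to be verified against the actual diff, not confirmed facts about this PR):

```markdown
## Compatibility

- No files were renamed or moved; commands, agents, skills, and rules keep their paths.
- No invocation syntax changed; workflows that reference these prompts continue to work.
- Only wording and structure changed; behavior-defining content is preserved.
```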
#### 4. Pattern Consistency in Examples (Opportunity)

In files that demonstrate workflows, consider showing 2-3 similar examples of the pattern rather than one, as sketched below. This helps the executing model recognize the pattern structure.
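One hedged illustration of what parallel examples could look like (the tasks and wording are invented for illustration, not taken from the repository):

```xml
<!-- Three parallel instances of the same clarify-before-acting pattern, -->
<!-- so the model learns the structure rather than the specifics. -->
<examples>
  <example>Task "improve the dashboard": confirm which metrics and which page before coding.</example>
  <example>Task "speed up the build": confirm the current and target build times before profiling.</example>
  <example>Task "clean up the docs": confirm which docs and what "clean" means before editing.</example>
</examples>
```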
### Security Considerations

No security concerns identified; the changes are limited to prompt text rather than executable code.

### Performance Considerations

Positive impact expected: the ~40% token reduction should lower cost and latency for every prompt invocation.
### Test Coverage

Note: This PR doesn't include automated tests for the refactored prompts. This is understandable, since testing LLM prompt effectiveness is challenging. However, consider evaluating the refactored prompts against their originals as follow-up work.
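One possible shape for that follow-up (entirely a sketch; neither this checklist nor any tooling exists in the PR):

```markdown
## Prompt regression checklist (per refactored file)

- [ ] Run the old and new prompt on the same 2-3 representative tasks.
- [ ] Confirm the new prompt produces equivalent or better output.
- [ ] Confirm no referenced file, command, or tag was lost in the rewrite.
```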
This could be a separate issue rather than blocking this PR.

### Code Quality & Best Practices

Excellent adherence to the prompt-engineering.mdc guidelines.
## Summary
Comprehensive refactoring of all prompts (commands, agents, skills, workflow rules) to follow LLM-to-LLM communication best practices from @rules/prompt-engineering.mdc.

## What Changed
### Structural Improvements
- Replaced numbered steps with semantic XML tags (<objective>, <workflow>, <guidelines>, etc.)

### Content Optimization

- Converted prescriptive instructions into goal-focused outcomes
- Consolidated repetitive content while preserving functionality
### Impact

- 18 files refactored; ~1,476 lines removed (a net reduction of ~40%)
- Improved clarity and LLM parseability; reduced token usage with all functionality maintained
### Files Modified
- Commands (9 files): autotask, setup-environment, create-prompt, troubleshoot, handoff-context, personality-change, ai-coding-config, product-intel, generate-AGENTS-file
- Agents (3 files): design-reviewer, seo-specialist, site-keeper
- Skills (5 files): brainstorming, research, skill-creator, systematic-debugging, youtube-transcript-analyzer
- Rules (1 file): git-worktree-task.mdc
## Design Decisions
- Preserved all functionality: every refactored file maintains its original intent and capabilities. No features were removed; only the presentation was improved.
- Pattern-based examples: kept good examples as patterns for LLMs to follow; removed "bad example" anti-patterns that LLMs might encode.
- XML structure: used semantic tag names that describe content purpose (e.g., <objective>, <workflow>, <critical-success-factors>) instead of numbered or generic tags. A small sketch of the difference follows this list.
- Goal-focused language: shifted from "do this, then do that" to "achieve this outcome," trusting the executing model's superior capabilities.
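The sketch below contrasts the two styles; the generic tag names are hypothetical, while the semantic names come from the refactored files:

```xml
<!-- Generic or numbered tags force the reader to track position: -->
<section-1>...</section-1>
<section-2>...</section-2>

<!-- Semantic tags carry their purpose in the name and can be reordered freely: -->
<objective>...</objective>
<workflow>...</workflow>
<critical-success-factors>...</critical-success-factors>
```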
## Testing
All refactored files have been reviewed to ensure their original intent and functionality are preserved.
🤖 Generated with Claude Code
Note
Refactors all AI prompt files (agents, commands, skills, rules) to a concise, XML-tagged, goal-oriented structure that reduces verbosity and removes anti-patterns while preserving functionality.
- Converts prompts to concise semantic XML tags (objective, workflow, quality-standards).
- Refactors commands (autotask, setup-environment, troubleshoot, handoff-context, ai-coding-config, product-intel, generate-AGENTS-file, create-prompt, personality-change).
- Refactors agents (design-reviewer, seo-specialist, site-keeper) and skills (brainstorming, research, skill-creator, systematic-debugging, youtube-transcript-analyzer).
- Updates workflow rules (.cursor/rules/git-worktree-task.mdc).

Written by Cursor Bugbot for commit 80ef61e. This will update automatically on new commits.