# Tech-Spec Workflow - Context-Aware Technical Planning (quick-flow)

The workflow execution engine is governed by: {project-root}/.bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml

Communicate all responses in {communication_language}, and language MUST be tailored to {user_skill_level}. Generate all documents in {document_output_language}.

This is a quick-flow effort: a tech-spec with context-rich story generation.
Quick Flow: tech-spec + epic with 1-5 stories (always generates epic structure).

LIVING DOCUMENT: Write to tech-spec.md continuously as you discover - never wait until the end.
CONTEXT IS KING: Gather ALL available context before generating specs.
DOCUMENT OUTPUT: Technical, precise, definitive. Specific versions only. User skill level ({user_skill_level}) affects conversation style ONLY, not document content.

Input documents are specified in workflow.yaml input_file_patterns - the workflow engine handles fuzzy matching and whole vs. sharded document discovery automatically.

⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks or months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.

⚠️ CHECKPOINT PROTOCOL: After EVERY tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a] Advanced Elicitation / [c] Continue / [p] Party-Mode / [y] YOLO → WAIT for user response. Never batch saves or skip checkpoints.

Check if {output_folder}/bmm-workflow-status.yaml exists.

If no workflow status file is found: the tech-spec workflow can run standalone or as part of a BMM workflow path.
**Recommended:** Run `workflow-init` first for project context tracking and workflow sequencing.
**Quick Start:** Continue in standalone mode - perfect for rapid prototyping and quick changes!
Ask: Continue in standalone mode or exit to run workflow-init? (continue/exit)

If continuing: Set standalone_mode = true

Great! Let's quickly configure your project...

How many user stories do you think this work requires?

**Single Story** - Simple change (bug fix, small isolated feature, single file change)
→ Generates: tech-spec + epic (minimal) + 1 story
→ Example: "Fix login validation bug" or "Add email field to user form"

**Multiple Stories (2-5)** - Coherent feature (multiple related changes, small feature set)
→ Generates: tech-spec + epic (detailed) + 2-5 stories
→ Example: "Add OAuth integration" or "Build user profile page"

Enter **1** for a single story, or **2-5** for the number of stories you estimate.

Capture user response as story_count (1-5). Validate: if not 1-5, ask for clarification; if > 5, suggest using the full BMad Method instead.

Is this a **greenfield** (new/empty codebase) or **brownfield** (existing codebase) project?

**Greenfield** - Starting fresh, no existing code aside from starter templates
**Brownfield** - Adding to or modifying existing functional code or project

Enter **greenfield** or **brownfield**.

Capture user response as field_type (greenfield or brownfield). Validate: if not greenfield or brownfield, ask again.

Perfect! Running as:
- **Story Count:** {{story_count}} {{#if story_count == 1}}story (minimal epic){{else}}stories (detailed epic){{/if}}
- **Field Type:** {{field_type}}
- **Mode:** Standalone (no status file tracking)

Let's build your tech-spec!

If the user chose exit: Exit workflow.

If the status file exists: Load the FULL file: {workflow-status}. Parse the workflow_status section. Check the status of the "tech-spec" workflow. Get selected_track from the YAML metadata indicating whether this is quick-flow-greenfield or quick-flow-brownfield. Get field_type from the YAML metadata (greenfield or brownfield). Find the first non-completed workflow (next expected workflow).

If the track does not match: **Incorrect Workflow for Level {{selected_track}}** - Tech-spec is for Simple projects. **Correct workflow:** `create-prd` (PM agent).
You should exit at this point, unless you want to force-run this workflow.

⚠️ Tech-spec already completed: {{tech-spec status}}. Re-running will overwrite the existing tech-spec. Continue? (y/n)
If no: Exiting. Use workflow-status to see your next step. Exit workflow.

⚠️ Next expected workflow: {{next_workflow}}. Tech-spec is out of sequence. Continue with tech-spec anyway? (y/n)
If no: Exiting. Run {{next_workflow}} instead. Exit workflow.

Set standalone_mode = false

After discovery, these content variables are available: {product_brief_content}, {research_content}, {document_project_content}

Welcome {user_name} warmly and explain what we're about to do:

"I'm going to gather all available context about your project before we dive into the technical spec. The following content has been auto-loaded:
- Product briefs and research: {product_brief_content}, {research_content}
- Brownfield codebase documentation: {document_project_content} (loaded via INDEX_GUIDED strategy)
- Your project's tech stack and dependencies
- Existing code patterns and structure

This ensures the tech-spec is grounded in reality and gives developers everything they need."

**PHASE 1: Load Existing Documents**

Search for and load (using the dual strategy: whole document first, then sharded):

1. **Product Brief:**
   - Search pattern: {output_folder}/*brief*.md
   - Sharded: {output_folder}/*brief*/index.md
   - If found: Load completely and extract key context

2. **Research Documents:**
   - Search pattern: {output_folder}/*research*.md
   - Sharded: {output_folder}/*research*/index.md
   - If found: Load completely and extract insights

3. **Document-Project Output (CRITICAL for brownfield):**
   - Always check: {output_folder}/index.md
   - If found: This is the brownfield codebase map - load ALL shards!
   - Extract: File structure, key modules, existing patterns, naming conventions

Create a summary of what was found and ask the user if there are other documents or information to consider before proceeding:
- List of loaded documents
- Key insights from each
- Brownfield vs. greenfield determination

**PHASE 2: Intelligently Detect Project Stack**

Use your comprehensive knowledge as a coding-capable LLM to analyze the project:

**Discover Setup Files:**
- Search {project-root} for dependency manifests (package.json, requirements.txt, Gemfile, go.mod, Cargo.toml, composer.json, pom.xml, build.gradle, pyproject.toml, etc.)
- Adapt to ANY project type - you know the ecosystem conventions

**Extract Critical Information:**
1. Framework name and EXACT version (e.g., "React 18.2.0", "Django 4.2.1")
2. All production dependencies with specific versions
3. Dev tools and testing frameworks (Jest, pytest, ESLint, etc.)
4. Available build/test scripts
5. Project type (web app, API, CLI, library, etc.)

**Assess Currency:**
- Identify whether major dependencies are outdated (>2 years old)
- Use WebSearch to find current recommended versions if needed
- Note migration complexity in your summary

**For Greenfield Projects:**
Use WebSearch to discover current best practices and official starter templates. Recommend appropriate starters based on the detected framework (or the user's intended stack). Present benefits conversationally: setup time saved, modern patterns, testing included.

Would you like to use a starter template? (yes/no/show-me-options)

Capture the preference and include it in the implementation stack if accepted.

**Trust Your Intelligence:** You understand project ecosystems deeply. Adapt your analysis to any stack - don't be constrained by examples. Extract what matters for developers.

Store comprehensive findings as {{project_stack_summary}}

**PHASE 3: Brownfield Codebase Reconnaissance** (if applicable)

Analyze the existing project structure:

1.
**Directory Structure:**
   - Identify main code directories (src/, lib/, app/, components/, services/)
   - Note organization patterns (feature-based, layer-based, domain-driven)
   - Identify test directories and patterns

2. **Code Patterns:**
   - Look for dominant patterns (class-based, functional, MVC, microservices)
   - Identify naming conventions (camelCase, snake_case, PascalCase)
   - Note file organization patterns

3. **Key Modules/Services:**
   - Identify major modules or services already in place
   - Note entry points (main.js, app.py, index.ts)
   - Document important utilities or shared code

4. **Testing Patterns & Standards (CRITICAL):**
   - Identify the test framework in use (from package.json/requirements.txt)
   - Note test file naming patterns (`.test.js`, `test_*.py`, `.spec.ts`, `*Test.java`)
   - Document test organization (tests/, `__tests__`, spec/, test/)
   - Look for test configuration files (jest.config.js, pytest.ini, .rspec)
   - Check for coverage requirements (in CI config, test scripts)
   - Identify mocking/stubbing libraries (jest.mock, unittest.mock, sinon)
   - Note assertion styles (expect, assert, should)

5.
**Code Style & Conventions (MUST CONFORM):**
   - Check for linter config (.eslintrc, .pylintrc, .rubocop.yml)
   - Check for formatter config (.prettierrc, pyproject.toml black settings, .editorconfig)
   - Identify code style:
     - Semicolons: yes/no (JavaScript/TypeScript)
     - Quotes: single/double
     - Indentation: spaces/tabs, size
     - Line length limits
   - Import/export patterns (named vs. default, organization)
   - Error handling patterns (try/catch, Result types, error classes)
   - Logging patterns (console, winston, logging module, specific formats)
   - Documentation style (JSDoc, docstrings, YARD, JavaDoc)

Store this as {{existing_structure_summary}}

**CRITICAL: Confirm Conventions with User**

I've detected these conventions in your codebase:

**Code Style:** {{detected_code_style}}
**Test Patterns:** {{detected_test_patterns}}
**File Organization:** {{detected_file_organization}}

Should I follow these existing conventions for the new code? Enter **yes** to conform to existing patterns, or **no** if you want to establish new standards.

Capture user response as conform_to_conventions (yes/no).

If no: What conventions would you like to use instead? (Or should I suggest modern best practices?)
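As an aside, the style detection described above can be approximated heuristically when no linter or formatter config exists. The sketch below is an illustration only - the function name, heuristics, and thresholds are assumptions, not part of the workflow, and real configs should always take precedence:

```python
def detect_js_style(source: str) -> dict:
    """Infer a few JS/TS style conventions from a source sample.
    Heuristic sketch only - prefer linter/formatter configs when present."""
    lines = [l for l in source.splitlines() if l.strip()]
    # Treat semicolons as the convention if at least half the lines end with one.
    semicolons = sum(l.rstrip().endswith(";") for l in lines) >= len(lines) / 2
    # Compare raw quote counts to guess the dominant quote style.
    single = source.count("'")
    double = source.count('"')
    # Any tab-indented line suggests tab indentation.
    uses_tabs = any(l.startswith("\t") for l in lines)
    return {
        "semicolons": semicolons,
        "quotes": "single" if single >= double else "double",
        "indentation": "tabs" if uses_tabs else "spaces",
    }
```

The output would feed the {{detected_code_style}} summary presented to the user for confirmation.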
Capture the new conventions, or use WebSearch for current best practices. Store confirmed conventions as {{existing_conventions}}.

If greenfield: Note: Greenfield project - no existing code to analyze. Set {{existing_structure_summary}} = "Greenfield project - new codebase"

**PHASE 4: Synthesize Context Summary**

Create {{loaded_documents_summary}} that includes:
- Documents found and loaded
- Brownfield vs. greenfield status
- Tech stack detected (or "To be determined" if greenfield)
- Existing patterns identified (or "None - greenfield" if applicable)

Present this summary to {user_name} conversationally:

"Here's what I found about your project:

**Documents Available:** [List what was found]
**Project Type:** [Brownfield with X framework Y version OR Greenfield - new project]
**Existing Stack:** [Framework and dependencies OR "To be determined"]
**Code Structure:** [Existing patterns OR "New codebase"]

This gives me a solid foundation for creating a context-rich tech spec!"

Template outputs: loaded_documents_summary, project_stack_summary, existing_structure_summary

Engage {user_name} in natural, adaptive conversation to deeply understand what needs to be built.

**Discovery Approach:** Adapt your questioning style to the complexity:
- For single-story changes: Focus on the specific problem, location, and approach
- For multi-story features: Explore user value, integration strategy, and scope boundaries

**Core Discovery Goals (accomplish through natural dialogue):**

1. **The Problem/Need**
   - What user or technical problem are we solving?
   - Why does this matter now?
   - What's the impact if we don't do this?

2. **The Solution Approach**
   - What's the proposed solution?
   - How should this work from a user/system perspective?
   - What alternatives were considered?

3. **Integration & Location**
   - Where does this fit in the existing codebase?
   - What existing code/patterns should we reference or follow?
   - What are the integration points?

4. **Scope Clarity**
   - What's IN scope for this work?
   - What's explicitly OUT of scope (future work, not needed)?
   - If multiple stories: What's MVP vs. enhancement?

5. **Constraints & Dependencies**
   - Technical limitations or requirements?
   - Dependencies on other systems, APIs, or services?
   - Performance, security, or compliance considerations?

6. **Success Criteria**
   - How will we know this is done correctly?
   - What does "working" look like?
   - What edge cases matter?

**Conversation Style:**
- Be warm and collaborative, not interrogative
- Ask follow-up questions based on their responses
- Help them think through implications
- Reference context from Phase 1 (existing code, stack, patterns)
- Adapt depth to {{story_count}} complexity

Synthesize discoveries into clear, comprehensive specifications.

Template outputs: problem_statement, solution_overview, change_type, scope_in, scope_out

ALL TECHNICAL DECISIONS MUST BE DEFINITIVE - NO AMBIGUITY ALLOWED. Use existing stack info to make SPECIFIC decisions. Reference brownfield code to guide implementation.

Initialize tech-spec.md with the rich template.

**Generate Context Section (already captured):**
These template variables are already populated from Step 1: {{loaded_documents_summary}}, {{project_stack_summary}}, {{existing_structure_summary}}. Just save them to the file.

**Generate The Change Section:**
Already captured from Step 2: {{problem_statement}}, {{solution_overview}}, {{scope_in}}, {{scope_out}}. Save to file.

**Generate Implementation Details:**
Now make DEFINITIVE technical decisions using all the context gathered.
**Source Tree Changes - BE SPECIFIC:**

Bad (NEVER do this):
- "Update some files in the services folder"
- "Add tests somewhere"

Good (ALWAYS do this):
- "src/services/UserService.ts - MODIFY - Add validateEmail() method at line 45"
- "src/routes/api/users.ts - MODIFY - Add POST /users/validate endpoint"
- "tests/services/UserService.test.ts - CREATE - Test suite for email validation"

Include:
- Exact file paths
- Action: CREATE, MODIFY, DELETE
- Specifically what changes (methods, classes, endpoints, components)

**Use brownfield context:**
- If modifying existing files, reference the current structure
- Follow existing naming patterns
- Place new code logically based on the current organization

Template output: source_tree_changes

**Technical Approach - BE DEFINITIVE:**

Bad (ambiguous):
- "Use a logging library like winston or pino"
- "Use Python 2 or 3"
- "Set up some kind of validation"

Good (definitive):
- "Use winston v3.8.2 (already in package.json) for logging"
- "Implement using Python 3.11 as specified in pyproject.toml"
- "Use Joi v17.9.0 for request validation following the pattern in UserController.ts"

**Use the detected stack:**
- Reference exact versions from package.json/requirements.txt
- Specify frameworks already in use
- Make decisions based on what's already there

**For greenfield:**
- Make definitive choices and justify them
- Specify exact versions
- No "or" statements allowed

Template output: technical_approach

**Existing Patterns to Follow:**

Document patterns from the existing codebase:
- Class structure patterns
- Function naming conventions
- Error handling approach
- Testing patterns
- Documentation style

Example: "Follow the service pattern established in UserService.ts:
- Export class with constructor injection
- Use async/await for all asynchronous operations
- Throw ServiceError with error codes
- Include JSDoc comments for all public methods"

For greenfield: "Greenfield project - establishing new patterns:
- [Define the patterns to establish]"

Template output: existing_patterns

**Integration Points:** Identify how
this change connects:
- Internal modules it depends on
- External APIs or services
- Database interactions
- Event emitters/listeners
- State management

Be specific about interfaces and contracts.

Template output: integration_points

**Development Context:**

**Relevant Existing Code:** Reference specific files or code sections developers should review:
- "See UserService.ts lines 120-150 for a similar validation pattern"
- "Reference AuthMiddleware.ts for the authentication approach"
- "Follow the error handling in PaymentService.ts"

**Framework/Libraries:** List with EXACT versions from the detected stack:
- Express 4.18.2 (web framework)
- winston 3.8.2 (logging)
- Joi 17.9.0 (validation)
- TypeScript 5.1.6 (language)

**Internal Modules:** List internal dependencies:
- @/services/UserService
- @/middleware/auth
- @/utils/validation

**Configuration Changes:** Any config files to update:
- Update .env with new SMTP settings
- Add validation schema to config/schemas.ts
- Update package.json scripts if needed

Template outputs: existing_code_references, framework_dependencies, internal_dependencies, configuration_changes, existing_conventions

If greenfield: Set {{existing_conventions}} = "Greenfield project - establishing new conventions per modern best practices"

**Implementation Stack:** Comprehensive stack with versions:
- Runtime: Node.js 20.x
- Framework: Express 4.18.2
- Language: TypeScript 5.1.6
- Testing: Jest 29.5.0
- Linting: ESLint 8.42.0
- Validation: Joi 17.9.0

All from the detected project setup!

Template output: implementation_stack

**Technical Details:** Deep technical specifics:
- Algorithms to implement
- Data structures to use
- Performance considerations
- Security considerations
- Error scenarios and handling
- Edge cases

Be thorough - developers need details!

Template output: technical_details

**Development Setup:** What does a developer need to run this locally? Based on the detected stack and scripts:

```
1. Clone repo (if not already)
2. npm install (installs all deps from package.json)
3.
cp .env.example .env (configure environment)
4. npm run dev (starts development server)
5. npm test (runs test suite)
```

Or for Python:

```
1. python -m venv venv
2. source venv/bin/activate
3. pip install -r requirements.txt
4. python manage.py runserver
```

Use the actual scripts from package.json/setup files!

Template output: development_setup

**Implementation Guide:**

**Setup Steps:** Pre-implementation checklist:
- Create feature branch
- Verify dev environment is running
- Review existing code references
- Set up test data if needed

**Implementation Steps:** Step-by-step breakdown:

For single-story changes:
1. [Step 1 with specific file and action]
2. [Step 2 with specific file and action]
3. [Write tests]
4. [Verify acceptance criteria]

For multi-story features, organize by story/phase:
1. Phase 1: [Foundation work]
2. Phase 2: [Core implementation]
3. Phase 3: [Testing and validation]

**Testing Strategy:**
- Unit tests for [specific functions]
- Integration tests for [specific flows]
- Manual testing checklist
- Performance testing if applicable

**Acceptance Criteria:** Specific, measurable, testable criteria:
1. Given [scenario], when [action], then [outcome]
2. [Metric] meets [threshold]
3.
[Feature] works in [environment]

Template outputs: setup_steps, implementation_steps, testing_strategy, acceptance_criteria

**Developer Resources:**

**File Paths Reference:** Complete list of all files involved:
- /src/services/UserService.ts
- /src/routes/api/users.ts
- /tests/services/UserService.test.ts
- /src/types/user.ts

**Key Code Locations:** Important functions, classes, modules:
- UserService class (src/services/UserService.ts:15)
- validateUser function (src/utils/validation.ts:42)
- User type definition (src/types/user.ts:8)

**Testing Locations:** Where tests go:
- Unit: tests/services/
- Integration: tests/integration/
- E2E: tests/e2e/

**Documentation to Update:** Docs that need updating:
- README.md - Add new endpoint documentation
- API.md - Document the /users/validate endpoint
- CHANGELOG.md - Note the new feature

Template outputs: file_paths_complete, key_code_locations, testing_locations, documentation_updates

**UX/UI Considerations:**

**Determine if this change has UI/UX impact:**
- Does it change what users see?
- Does it change how users interact?
- Does it affect user workflows?

If YES, document:

**UI Components Affected:**
- List specific components (buttons, forms, modals, pages)
- Note which need creation vs. modification

**UX Flow Changes:**
- Current flow vs. new flow
- User journey impact
- Navigation changes

**Visual/Interaction Patterns:**
- Follow the existing design system? (check for design tokens, component library)
- New patterns needed?
- Responsive design considerations (mobile, tablet, desktop)

**Accessibility:**
- Keyboard navigation requirements
- Screen reader compatibility
- ARIA labels needed
- Color contrast standards

**User Feedback:**
- Loading states
- Error messages
- Success confirmations
- Progress indicators

If NO: "No UI/UX impact - backend/API/infrastructure change only"

Template output: ux_ui_considerations

**Testing Approach:** Comprehensive testing strategy using {{test_framework_info}}:

**CONFORM TO EXISTING TEST STANDARDS:**
- Follow existing test file naming: {{detected_test_patterns.file_naming}}
- Use existing test organization: {{detected_test_patterns.organization}}
- Match existing assertion style: {{detected_test_patterns.assertion_style}}
- Meet existing coverage requirements: {{detected_test_patterns.coverage}}

**Test Strategy:**
- Test framework: {{detected_test_framework}} (from project dependencies)
- Unit tests for [specific functions/methods]
- Integration tests for [specific flows/APIs]
- E2E tests if UI changes
- Mock/stub strategies (use existing patterns: {{detected_test_patterns.mocking}})
- Performance benchmarks if applicable
- Accessibility tests if UI changes

**Coverage:**
- Unit test coverage: [target %]
- Integration coverage: [critical paths]
- Ensure all acceptance criteria have corresponding tests

Template outputs: test_framework_info, testing_approach

**Deployment Strategy:**

**Deployment Steps:** How to deploy this change:
1. Merge to main branch
2. Run CI/CD pipeline
3. Deploy to staging
4. Verify in staging
5. Deploy to production
6. Monitor for issues

**Rollback Plan:** How to undo if problems arise:
1. Revert commit [hash]
2. Redeploy previous version
3. Verify rollback successful

**Monitoring:** What to watch after deployment:
- Error rates in [logging service]
- Response times for [endpoint]
- User feedback on [feature]

Template outputs: deployment_steps, rollback_plan, monitoring_approach

Always run validation - this is NOT optional!

Tech-spec generation complete! Now running automatic validation...
Load {installed_path}/checklist.md

Review tech-spec.md against ALL checklist criteria:

**Section 1: Output Files Exist**
- Verify tech-spec.md was created
- Check for unfilled template variables

**Section 2: Context Gathering**
- Validate all available documents were loaded
- Confirm stack detection worked
- Verify brownfield analysis (if applicable)

**Section 3: Tech-Spec Definitiveness**
- Scan for "or" statements (FAIL if found)
- Verify all versions are specific
- Check stack alignment

**Section 4: Context-Rich Content**
- Verify all new template sections are populated
- Check existing code references (brownfield)
- Validate framework dependencies are listed

**Section 5-6: Story Quality (deferred to Step 5)**

**Section 7: Workflow Status (if applicable)**

**Section 8: Implementation Readiness**
- Can a developer start immediately?
- Is the tech-spec comprehensive enough?

Generate a validation report with specific scores:
- Context Gathering: [Comprehensive/Partial/Insufficient]
- Definitiveness: [All definitive/Some ambiguity/Major issues]
- Brownfield Integration: [N/A/Excellent/Partial/Missing]
- Stack Alignment: [Perfect/Good/Partial/None]
- Implementation Readiness: [Yes/No]

If issues are found:

⚠️ **Validation Issues Detected:** {{list_of_issues}}

I can fix these automatically. Shall I proceed? Fix validation issues? (yes/no)

If yes: Fix each issue and re-validate. ✅ Issues fixed! Re-validation passed.
If no: ⚠️ Proceeding with warnings. Issues should be addressed manually.

If validation passes:

✅ **Validation Passed!**

**Scores:**
- Context Gathering: {{context_score}}
- Definitiveness: {{definitiveness_score}}
- Brownfield Integration: {{brownfield_score}}
- Stack Alignment: {{stack_score}}
- Implementation Readiness: ✅ Ready

Tech-spec is high quality and ready for story generation!
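The mechanical subset of the definitiveness checks above - unfilled template variables and literal "or" statements - could be sketched as follows. This is an illustration under stated assumptions, not part of the workflow engine: the regexes are guesses at the checklist's intent, and a flagged "or" still needs human judgment, since not every "or" is ambiguous:

```python
import re

# Unfilled template variables, e.g. {{project_stack_summary}}
UNFILLED = re.compile(r"\{\{[^{}]+\}\}")
# Standalone "or", e.g. "winston or pino" (word-bounded, so "for" is ignored)
AMBIGUOUS_OR = re.compile(r"\bor\b", re.IGNORECASE)


def scan_tech_spec(text: str) -> list:
    """Return a list of potential definitiveness issues, one per finding."""
    issues = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if UNFILLED.search(line):
            issues.append(f"line {lineno}: unfilled template variable")
        if AMBIGUOUS_OR.search(line):
            issues.append(f"line {lineno}: possible ambiguous 'or' statement")
    return issues
```

An empty result would feed the "All definitive" score; any findings would go into {{list_of_issues}} for the auto-fix prompt.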
Invoke the unified story generation workflow: {instructions_generate_stories}

This will generate:
- **epics.md** - Epic structure (minimal for 1 story, detailed for multiple)
- **story-{epic-slug}-N.md** - Story files (where N = 1 to {{story_count}})

All stories reference tech-spec.md as primary context - comprehensive enough that developers can often skip the story-context workflow.

**✅ Tech-Spec Complete, {user_name}!**

**Deliverables Created:**
- ✅ **tech-spec.md** - Context-rich technical specification
  - Includes: brownfield analysis, framework details, existing patterns
- ✅ **epics.md** - Epic structure{{#if story_count == 1}} (minimal for single story){{else}} with {{story_count}} stories{{/if}}
- ✅ **story-{epic-slug}-1.md** - First story{{#if story_count > 1}}
- ✅ **story-{epic-slug}-2.md** - Second story{{/if}}{{#if story_count > 2}}
- ✅ **story-{epic-slug}-3.md** - Third story{{/if}}{{#if story_count > 3}}
- ✅ **Additional stories** through story-{epic-slug}-{{story_count}}.md{{/if}}

**What Makes This Tech-Spec Special:**

The tech-spec is comprehensive enough to serve as the primary context document:
- ✨ Brownfield codebase analysis (if applicable)
- ✨ Exact framework and library versions from your project
- ✨ Existing patterns and code references
- ✨ Specific file paths and integration points
- ✨ Complete developer resources

**Next Steps:**

**🎯 Recommended Path - Direct to Development:**
Since the tech-spec is CONTEXT-RICH, you can often skip story-context generation!

{{#if story_count == 1}}
**For Your Single Story:**
1. Ask the DEV agent to run `dev-story`
   - Select story-{epic-slug}-1.md
   - The tech-spec provides all the context needed!

💡 **Optional:** Only run `story-context` (SM agent) if this is unusually complex
{{else}}
**For Your {{story_count}} Stories - Iterative Approach:**
1. **Start with Story 1:**
   - Ask the DEV agent to run `dev-story`
   - Select story-{epic-slug}-1.md
   - The tech-spec provides context
2.
**After Story 1 Complete:**
   - Repeat for story-{epic-slug}-2.md
   - Continue through story {{story_count}}

💡 **Alternative:** Use `sprint-planning` (SM agent) to organize all stories as a coordinated sprint
💡 **Optional:** Run `story-context` (SM agent) for complex stories needing additional context
{{/if}}

**Your Tech-Spec:**
- 📄 Saved to: `{output_folder}/tech-spec.md`
- Epic & Stories: `{output_folder}/epics.md` + `{sprint_artifacts}/`
- Contains: All context, decisions, patterns, and implementation guidance
- Ready for: Direct development!

The tech-spec is your single source of truth! 🚀