# Getting Started with Agentic Coding
This guide walks you through setting up and using AI coding agents in the Registration app.
## Quick Start
- Set up your agent (see Supported Agents below)
- Open the project in your IDE or terminal
- Paste a task using the template from `.ai/task-template.md`:

```
Goal: Add a "Company size" dropdown
Non-goals: Do not change existing fields
Acceptance criteria:
- Dropdown renders with 5 options
- Submitted with the form
- Validation error when empty
```
The agent explores the code, proposes a plan, and waits for your approval before writing any code. You're the gatekeeper; the agent does the rest.
Read on for setup details, the full workflow, and tips.
## Supported Agents
| Agent | Instructions source |
|---|---|
| Claude Code | `CLAUDE.md` imports `AGENTS.md` |
| GitHub Copilot | Reads `AGENTS.md` directly |
All agent instructions live in a single file: `AGENTS.md` (project root). Edit only this file.
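For reference, the Claude Code side can stay a thin shim: `.claude/CLAUDE.md` only needs to import the shared file via Claude Code's `@path` import syntax. A minimal sketch (verify the relative path resolves correctly in your setup):

```markdown
<!-- .claude/CLAUDE.md: keep this a shim and edit AGENTS.md instead -->
@AGENTS.md
```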
## Prerequisites
### Claude Code
- Install the Claude Code CLI: `npm install -g @anthropic-ai/claude-code`
- Add Playwright MCP for browser verification: `claude mcp add playwright -- npx @playwright/mcp@latest`
- Run `claude` in the project root — it reads `.claude/CLAUDE.md`, which imports `AGENTS.md`
### GitHub Copilot
- Enable GitHub Copilot in your IDE
- Copilot reads `AGENTS.md` from the project root
- Add Playwright MCP for browser verification (see repo for IDE-specific setup)
## The `.ai/` Directory
All agents share the same instruction files in `.ai/`. These are the "brain" of the agentic workflow:
| File | Purpose |
|---|---|
| `project-map.md` | Where code lives |
| `conventions.md` | Architecture, patterns |
| `task-template.md` | Template for tasks |
| `stop-and-ask.md` | When the agent must stop |
| `definition-of-done.md` | Quality bar checklist |
| `ui-review-checklist.md` | Visual verification |
| `dependency-policy.md` | Rules for dependencies |
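How `AGENTS.md` points agents at these files is up to the repo; a hypothetical excerpt might read:

```markdown
<!-- AGENTS.md (hypothetical excerpt) -->
Before planning any task, read:
- .ai/project-map.md (where code lives)
- .ai/conventions.md (architecture and patterns)
- .ai/stop-and-ask.md (when to stop and ask the human)

Before claiming a task is done, check .ai/definition-of-done.md.
```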
## Workflow: How to Give a Task
### 1. Write the task using the template
Copy `.ai/task-template.md` and fill in the placeholders:
```
## Task
Goal: Add a "Company size" dropdown to the form
Non-goals: Do not change existing fields
Context / Inputs: Figma link, API spec
Acceptance criteria:
- Dropdown renders with 5 options
- Selection is submitted with the form
- Validation error shown when no option selected
```
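Criteria like the last one boil down to small, checkable rules. A hypothetical sketch in TypeScript of what the agent might implement (the option labels are invented; the real ones come from the design spec):

```typescript
// Hypothetical option list; the real labels come from the design spec.
const COMPANY_SIZES = ["1-10", "11-50", "51-200", "201-1000", "1000+"];

// Returns an error message when the criterion "validation error shown
// when no option selected" applies, otherwise null (valid selection).
function validateCompanySize(value: string | null): string | null {
  if (value === null || value === "") return "Please select a company size";
  if (!COMPANY_SIZES.includes(value)) return "Unknown company size";
  return null;
}
```

Writing the criteria this precisely is what lets the agent turn them into tests without guessing.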
Key sections:
- Goal — what's different when done
- Non-goals — what must NOT change
- Acceptance criteria — how you verify it
- UI Scope — routes, breakpoints, states
- API Contract — method, path, shapes
### 2. Paste it into the agent
The agent follows a strict loop: Explore → Plan → Implement → Review → Ship
- Explore — reads code, builds context
- Plan — proposes files, steps, risks. You approve or reject.
- Implement — executes the approved plan
- Review — runs the definition-of-done (DOD) checklist (lint, types, tests, browser verification)
- Ship — provides a PR summary
You are the gatekeeper at the Plan step: the agent will not write code without your approval.
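The approval gate in that loop can be sketched as a tiny state machine (illustrative only; the agent's actual behavior is governed by the `.ai/` files):

```typescript
type Phase = "explore" | "plan" | "implement" | "review" | "ship";

const ORDER: Phase[] = ["explore", "plan", "implement", "review", "ship"];

// The agent never advances past "plan" until the human approves the plan.
function nextPhase(current: Phase, planApproved: boolean): Phase {
  if (current === "plan" && !planApproved) return "plan"; // wait for the gatekeeper
  const i = ORDER.indexOf(current);
  return ORDER[Math.min(i + 1, ORDER.length - 1)];
}
```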
### 3. Review the plan
Before approving, check:
- Does the plan match your acceptance criteria?
- Are the right files being touched?
- Is the scope contained?
If something's off, tell the agent what to change.
### 4. Let the agent implement and review
After approval, the agent self-reviews against the DOD:
- Lint (`pnpm run lint`)
- Type check (`pnpm run check`)
- Format (`pnpm run fmt`)
- Tests (`pnpm run test`)
- E2E tests (if critical flow changed)
- Browser verification (if UI changed)
- UI review checklist (if UI changed)
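These commands assume matching entries in `package.json`. The exact tools are the repo's choice; an illustrative shape (the tool names here are assumptions, not a spec):

```json
{
  "scripts": {
    "lint": "eslint .",
    "check": "tsc --noEmit",
    "fmt": "prettier --write .",
    "test": "vitest run"
  }
}
```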
### 5. Ship
The agent produces a PR summary with:
- What changed and why
- Verification steps for the reviewer
- Screenshots from browser verification
## Browser Verification
The agent can open a real browser via Playwright MCP to visually verify UI changes. This catches issues that unit tests can't: CSS layout bugs, rendering quirks, broken state transitions, responsive breakage.
The agent will:
- Start the dev server
- Enable feature toggles if needed
- Navigate to the affected route
- Test all states (default, error, success)
- Check responsive layout (375px, 1280px+)
- Capture screenshots as PR evidence
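To get this verification for a specific change, spell it out in the task's UI Scope section. A hypothetical fragment (the route and toggle name are invented placeholders):

```
UI Scope:
- Route: /register (behind the "company-size" feature toggle)
- States: default, validation error, successful submit
- Breakpoints: 375px and 1280px+
- Capture a screenshot of each state as PR evidence
```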
## Tips for Better Results
- Be specific — "Add a dropdown with these 5 options" beats "add a field"
- Include context — link to Figma, API docs, related tickets
- Set non-goals — explicitly say what should NOT change
- Attach screenshots — a picture of the target design eliminates ambiguity
- Review the plan — highest-leverage moment; catching a wrong approach saves rework
- Keep tasks focused — one feature per task; don't bundle unrelated changes
## Improving the Agentic Workflow
The `.ai/` files are living documents. After every task, the agent looks for improvements:
- A convention that wasn't documented
- A gotcha that others would hit too
- A stop-and-ask scenario that was missing
- Stale or incorrect information
You can also update these files yourself. Better instructions lead to better agent output.
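A typical improvement is a single new bullet in the relevant file. A hypothetical addition to `.ai/stop-and-ask.md` (the scenario and wording are invented examples):

```markdown
- Stop and ask before modifying any field that is persisted through the registration API
```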