Getting Started with Agentic Coding

This guide walks you through setting up and using AI coding agents in the Registration app.

Quick Start

  1. Set up your agent (see Supported Agents below)
  2. Open the project in your IDE or terminal
  3. Paste a task using the template from .ai/task-template.md:
Goal: Add a "Company size" dropdown
Non-goals: Do not change existing fields
Acceptance criteria:
- Dropdown renders with 5 options
- Submitted with the form
- Validation error when empty

The agent explores the code, proposes a plan, and waits for your approval before writing any code. That's it: you're the gatekeeper; the agent does the rest.

Read on for setup details, the full workflow, and tips.


Supported Agents

| Agent | Instructions source |
| --- | --- |
| Claude Code | CLAUDE.md imports AGENTS.md |
| GitHub Copilot | Reads AGENTS.md directly |

All agents read their instructions from a single entry point: AGENTS.md in the project root. Edit only this file to change agent behavior.

Prerequisites

Claude Code

  1. Install Claude Code CLI: npm install -g @anthropic-ai/claude-code
  2. Add Playwright MCP for browser verification: claude mcp add playwright -- npx @playwright/mcp@latest
  3. Run claude in the project root — it reads .claude/CLAUDE.md which imports AGENTS.md
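Claude Code resolves @-imports inside CLAUDE.md, so the file can stay a one-liner. A minimal sketch of what .claude/CLAUDE.md might contain (the exact import path is an assumption for this repo; check the real file):

```markdown
# .claude/CLAUDE.md
All project instructions live in the shared agent file:

@../AGENTS.md
```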

GitHub Copilot

  1. Enable GitHub Copilot in your IDE
  2. Copilot reads AGENTS.md from the project root
  3. Add Playwright MCP for browser verification (see repo for IDE-specific setup)

The .ai/ Directory

All agents share the same instruction files in .ai/. These are the "brain" of the agentic workflow:

| File | Purpose |
| --- | --- |
| project-map.md | Where code lives |
| conventions.md | Architecture, patterns |
| task-template.md | Template for tasks |
| stop-and-ask.md | When the agent must stop |
| definition-of-done.md | Quality bar checklist |
| ui-review-checklist.md | Visual verification |
| dependency-policy.md | Rules for dependencies |
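To give a flavor of what these files hold, a stop-and-ask.md entry might look like this (illustrative only; the real file in this repo defines the actual rules):

```markdown
## Stop and ask before proceeding when:

- The task requires adding a new dependency (see dependency-policy.md)
- An acceptance criterion conflicts with a documented convention
- The change would touch files outside the task's stated scope
```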

Workflow: How to Give a Task

1. Write the task using the template

Copy .ai/task-template.md and fill in the placeholders:

## Task

Goal: Add a "Company size" dropdown to the form

Non-goals: Do not change existing fields

Context / Inputs: Figma link, API spec

Acceptance criteria:

- Dropdown renders with 5 options
- Selection is submitted with the form
- Validation error shown when no option selected

Key sections:

  • Goal — what's different when done
  • Non-goals — what must NOT change
  • Acceptance criteria — how you verify it
  • UI Scope — routes, breakpoints, states
  • API Contract — method, path, shapes
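The UI Scope and API Contract sections are not shown in the filled-in example above. Completed, they might read like this (every value here is illustrative, not taken from this repo):

```markdown
UI Scope: /register route; 375px and 1280px breakpoints; default, error, and success states

API Contract: POST /api/registrations; request body gains an optional companySize string field
```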

2. Paste it into the agent

The agent follows a strict loop: Explore → Plan → Implement → Review → Ship

  1. Explore — reads code, builds context
  2. Plan — proposes files, steps, risks. You approve or reject.
  3. Implement — executes the approved plan
  4. Review — runs the DOD checklist (lint, types, tests, browser verification)
  5. Ship — provides a PR summary

You are the gatekeeper at step 2. The agent will not write code without your approval.

3. Review the plan

Before approving, check:

  • Does the plan match your acceptance criteria?
  • Are the right files being touched?
  • Is the scope contained?

If something's off, tell the agent what to change.

4. Let the agent implement and review

After approval, the agent self-reviews against the DOD:

  1. Lint (pnpm run lint)
  2. Type check (pnpm run check)
  3. Format (pnpm run fmt)
  4. Tests (pnpm run test)
  5. E2E tests (if critical flow changed)
  6. Browser verification (if UI changed)
  7. UI review checklist (if UI changed)

5. Ship

The agent produces a PR summary with:

  • What changed and why
  • Verification steps for the reviewer
  • Screenshots from browser verification

Browser Verification

The agent can open a real browser via Playwright MCP to visually verify UI changes. This catches issues that unit tests can't: CSS layout bugs, rendering quirks, broken state transitions, responsive breakage.

The agent will:

  1. Start the dev server
  2. Enable feature toggles if needed
  3. Navigate to the affected route
  4. Test all states (default, error, success)
  5. Check responsive layout (375px, 1280px+)
  6. Capture screenshots as PR evidence

Tips for Better Results

  • Be specific — "Add a dropdown with these 5 options" beats "add a field"
  • Include context — link to Figma, API docs, related tickets
  • Set non-goals — explicitly say what should NOT change
  • Attach screenshots — a picture of the target design eliminates ambiguity
  • Review the plan — highest-leverage moment; catching a wrong approach saves rework
  • Keep tasks focused — one feature per task; don't bundle unrelated changes

Improving the Agentic Workflow

The .ai/ files are living documents. After every task, the agent looks for improvements:

  • A convention that wasn't documented
  • A gotcha that others would hit too
  • A stop-and-ask scenario that was missing
  • Stale or incorrect information

You can also update these files yourself. Better instructions lead to better agent output.