How to Structure an AI-Friendly Codebase
Config-driven architecture, Zod schemas, and CLAUDE.md conventions that let Cursor, Copilot, and Claude generate production-quality code from the first prompt.

"AI-friendly" is one of those phrases that sounds like marketing until you actually compare the prompting experience in a well-structured codebase versus a typical one. The difference is dramatic — not "a little faster," but "10 correct suggestions in a row" versus "constantly fixing hallucinated imports."
This post is the concrete checklist we use.
What "AI-friendly" actually means
It is not about using more AI. It's about structuring your code so that an AI agent, given a typical context window, has enough signal to produce code that compiles, runs, and matches your existing style on the first try.
Three characteristics define it:
- Single source of truth for each concept — one file that owns the schema, the types, and the defaults.
- Predictable file locations — the agent can guess where something lives and be right.
- Explicit conventions in writing — CLAUDE.md, .cursorrules, or AGENTS.md that encode the non-obvious rules.
Config-driven design
The highest-leverage pattern is pushing all customization into config files that are plain data, not code.
```typescript
// config/features.config.ts
export const featuresConfig = {
  ratings: { enabled: true, max: 5 },
  comments: { enabled: true, moderation: "auto" },
  ai: { enabled: false },
} as const
```

This flips the common case. Instead of "find every component that renders ratings and add a conditional," you toggle one flag and the agent knows, from the shape of the config, what to check.
Rule of thumb
If an AI agent asks you "where do I toggle X," your codebase isn't config-driven enough. X should live in a single, obvious file.
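To make the flag the single source of truth, components read it through one obvious accessor. A minimal sketch (the `isEnabled` helper is an assumed convention, not from the post):

```typescript
// config/features.config.ts owns the flags; everything else asks it.
const featuresConfig = {
  ratings: { enabled: true, max: 5 },
  comments: { enabled: true, moderation: "auto" },
  ai: { enabled: false },
} as const

// One helper, one place to toggle: no component hard-codes a flag.
function isEnabled(feature: keyof typeof featuresConfig): boolean {
  return featuresConfig[feature].enabled
}

console.log(isEnabled("ratings")) // true
console.log(isEnabled("ai"))      // false
```

Because the config is `as const`, `keyof typeof featuresConfig` gives the agent an exhaustive, autocompletable list of features — misspelling a feature name is a compile error, not a runtime surprise.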
Zod schemas as the single source of truth
Every config, every API input, every form should start with a Zod schema. The schema gives you:
- Runtime validation (cheap insurance against bad data)
- TypeScript types via `z.infer`
- Documentation — the schema is the spec
- A target for AI-generated code. "Add a field X of type Y" is unambiguous.
```typescript
import { z } from "zod"

export const plansConfigSchema = z.object({
  free: z.object({ listings: z.number(), featured: z.boolean() }),
  pro: z.object({ listings: z.number(), featured: z.boolean() }),
})

export type PlansConfig = z.infer<typeof plansConfigSchema>
```

A codebase with Zod at the boundary lets AI agents propose changes that are statically checkable — they either compile or they don't, no runtime surprises.
Predictable file structure
Here's the shape that works well:
```
app/             # routes only — thin wrappers
components/      # presentation
  ui/            # primitives (shadcn)
  landing/       # domain grouping
  admin/
config/          # all customization, Zod-validated
lib/             # business logic
  [feature]/
    content.ts
    schema.ts
types/           # shared types (imported elsewhere)
```
The rule: given a feature name, there is exactly one place to look. No "utils" folder where half the code actually lives. No duplication between lib/ and helpers/.
CLAUDE.md: the contract
Every repo that takes AI collaboration seriously has a top-level CLAUDE.md (or AGENTS.md) with:
- The commands for build, test, lint (with the correct package manager).
- The architectural rules in plain English — "never import from `app/` into `lib/`."
- The conventions that aren't obvious from the code — "we use Server Components by default; opt into Client only when you need `useState`."
- Pointers to the config files.
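A skeleton of what that file might look like (the commands and the second config path are assumptions for illustration; only `config/features.config.ts` appears earlier in this post):

```markdown
# CLAUDE.md

## Commands
- Build: `pnpm build`   Test: `pnpm test`   Lint: `pnpm lint`

## Architecture rules
- Never import from `app/` into `lib/`.
- Server Components by default; add `"use client"` only for state or effects.

## Config
- Feature flags: `config/features.config.ts`
- All config files are Zod-validated; change the schema first, then the data.
```

Short, declarative bullets work better than prose here: the file is loaded into context verbatim, so every line should earn its tokens.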
Keep it under 200 lines. AI agents load it into context on every turn — bloat is expensive.
Naming consistency
AI agents infer patterns. If half your route handlers are handleX and half are xHandler, the agent will mix both and introduce subtle bugs. Pick one and enforce it.
The same applies to file names, component names, hook names, and API route shapes. Internal consistency is worth more than external "best practices."
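Enforcement can be mechanical rather than aspirational. One option is typescript-eslint's `naming-convention` rule — a sketch of a minimal config, assuming `@typescript-eslint/eslint-plugin` is installed:

```jsonc
{
  "rules": {
    "@typescript-eslint/naming-convention": [
      "error",
      { "selector": "function", "format": ["camelCase"] },
      { "selector": "typeLike", "format": ["PascalCase"] }
    ]
  }
}
```

Once the linter encodes the pattern, the agent sees consistent examples everywhere and has no conflicting precedent to imitate.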
Tests as specification
Tests are the most efficient form of AI context. A good test says:
- What the input looks like
- What the output should be
- What edge cases matter
When you ask the agent to modify parseSubmission, it'll read parseSubmission.test.ts and know exactly what not to break. Code without tests forces the agent to guess intent.
The measurable outcome
After restructuring a directory-site codebase along these lines:
| Metric | Before | After |
|---|---|---|
| First-prompt correctness | ~40% | ~85% |
| Avg. back-and-forth turns per feature | 6-8 | 2-3 |
| Hallucinated imports per 1k LOC | 3-4 | <1 |
These aren't research numbers. They're what happens when an agent has a codebase where guessing works, because the structure is consistent enough that the guess is the right answer.
Takeaways
- Push customization into typed config files.
- Use Zod as your source of truth.
- Pick a file structure and stick to it ruthlessly.
- Ship a `CLAUDE.md` that encodes the non-obvious rules.
An AI-friendly codebase is just a readable codebase, taken seriously. The payoff shows up whether the reader is an AI or your new teammate.