Free · 12 checks · ~5 min
AI Codebase Readiness Checklist
Use this before you onboard real users, raise capital, or hand the repo to engineers. Each check is a yes/no question. Count your "no" answers — the more you have, the closer your MVP is to expensive rework.
01. Does your repo have agent-readable context files?
Coding agents produce wildly different output depending on whether you've told them how this product is supposed to be built. Look for:
- AGENTS.md — cross-tool standard for agent instructions.
- CLAUDE.md — Claude Code-specific repo guide.
- .cursor/rules/ — Cursor rule files scoped to areas of the repo.
If these don't exist (or are still default boilerplate), every new task is a coin flip on conventions.
02. Is the project structure documented somewhere a human or agent can find?
One short docs/architecture.md describing the top-level folders, the boundary between frontend and backend, where business logic lives, and which directories are "no-go" zones. If a new contributor (human or AI) can't answer "where does X live?" in 30 seconds, this check fails.
03. Is your database schema reviewable in one place?
Either a single migration history that tells the story, or a docs/database.md summary of tables, columns, relationships, and the reasoning. Agents are particularly bad at noticing when they invent a third "users" table — explicit schema docs prevent that.
04. Are migrations forward-only and reversible?
No "edit a previous migration to fix the schema" folklore. Every change is its own migration. You can roll a database forward and back without manual SQL. If you've never run a real migration in a staging environment, you don't know yet — this is a "no".
05. Is auth implemented through one library, not three half-finished attempts?
Agents love to invent auth flows. Pick one (Clerk, Auth.js, Supabase Auth, Lucia, Cognito, your own JWT layer — any one of them) and make sure all routes go through it. More than one auth helper in the same repo is a "no".
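A minimal sketch of what "all routes go through it" can look like, assuming an Express-style app; requireUser and verifySession are illustrative names, not any specific provider's API:

```ts
// middleware/requireUser.ts, a hypothetical single auth gate that every
// protected route imports, regardless of which auth provider sits behind it.
import type { Request, Response, NextFunction } from "express";
import { verifySession } from "../lib/auth"; // the ONE auth wrapper in the repo

export async function requireUser(req: Request, res: Response, next: NextFunction) {
  const user = await verifySession(req.headers.authorization);
  if (!user) {
    return res.status(401).json({ error: "unauthenticated" });
  }
  // Attach the verified user so downstream handlers never re-implement auth.
  (req as Request & { user: typeof user }).user = user;
  next();
}

// Usage: app.get("/api/projects", requireUser, listProjects);
```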
06. Are secrets segregated from code and from production env files?
No live API keys or passwords in committed files. .env.example exists with safe placeholders, real .env is gitignored, and production secrets live in a real secret store (1Password, Doppler, AWS/GCP Secret Manager, Vercel/Netlify env, etc.). Bonus: there is a written process for rotating them.
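One way to enforce this at boot, sketched here with zod (an assumption; any schema validator works): the app refuses to start if a required secret is missing, so nothing silently falls back to a committed value.

```ts
// lib/env.ts, a minimal sketch: fail fast at startup if a secret is absent,
// instead of discovering it in production. Values come from the environment,
// never from committed files.
import { z } from "zod";

const EnvSchema = z.object({
  DATABASE_URL: z.string().min(1),
  STRIPE_SECRET_KEY: z.string().min(1), // illustrative name; list your real keys
  SESSION_SECRET: z.string().min(32),
});

// Throws with a readable message if anything required is missing or malformed.
export const env = EnvSchema.parse(process.env);
```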
07. Is there a consistent error-handling and logging pattern?
One way to throw, one way to catch, one way to log. Errors that surface to users have predictable shapes; logs include enough context to debug without reproducing the bug. If different routes handle errors differently, you'll spend the next quarter chasing inconsistencies.
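A rough sketch of what "one way" can mean in TypeScript; AppError and logError are illustrative names, not a prescribed pattern:

```ts
// lib/errors.ts: one error type to throw, one helper to log.
export class AppError extends Error {
  constructor(
    message: string,
    public readonly statusCode = 500,
    public readonly context: Record<string, unknown> = {},
  ) {
    super(message);
    this.name = "AppError";
  }
}

export function logError(err: unknown, context: Record<string, unknown> = {}) {
  const merged = err instanceof AppError ? { ...err.context, ...context } : context;
  // One log shape everywhere: message plus enough context to debug
  // without reproducing the bug.
  console.error(JSON.stringify({
    level: "error",
    message: err instanceof Error ? err.message : String(err),
    ...merged,
  }));
}
```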
08. Do you have at least smoke-level tests on the critical path?
Not "100% coverage" — that's overkill for an MVP. Just: signup → core action → critical write → critical read. If those four still pass after a refactor, you can ship with reasonable confidence. If you have zero tests, every AI-generated change is a guess.
09. Is deployment one command (or one merge)?
Push to main, deploy fires; or run deploy and the right thing happens. No "I need to remember to also …". Manual steps are where AI-coded MVPs die when traffic appears.
10. Have the basics of OWASP been considered?
SQL injection (parameterized queries, not string concatenation), XSS (escaped output by default), CSRF (tokens or SameSite cookies), rate limiting on auth endpoints, no PII in logs. Agents will happily write a login route that lets anyone log in as anyone. A 30-minute review catches most of it.
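For the first item, a small example assuming node-postgres (pg): the query text and the user-supplied value travel separately, so the input can never rewrite the SQL.

```ts
// Parameterized query sketch with node-postgres.
import { Pool } from "pg";

const pool = new Pool(); // reads connection settings from the environment

// Unsafe: `SELECT * FROM users WHERE email = '${email}'` (injectable).
// Safe: placeholder in the SQL, value in a separate array.
export async function findUserByEmail(email: string) {
  const result = await pool.query(
    "SELECT id, email FROM users WHERE email = $1",
    [email],
  );
  return result.rows[0] ?? null;
}
```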
11Is "done" defined somewhere?▾
A short docs/definition-of-done.md — what tests must pass, what docs must be updated, what manual checks are required before merging. Without it, agents declare any green build "done" and you ship features that haven't been validated.
12. Could a stranger continue this project tomorrow?
Imagine a developer joining for one week, no calls. Can they: clone, install, run locally, find the architecture doc, find the database doc, deploy a small change? If the honest answer is "no, they'd need me to walk them through it", the codebase is not handoff-ready — and any rescue/rewrite quote you get from a dev shop will reflect that.
How to read your score
0–2 nos → Solid foundation. You're in the minority of vibe-coded MVPs.
3–5 nos → Fixable in a focused week. Most growth-stage problems will trace back here.
6+ nos → The codebase is a stronger candidate for a Setup Sprint than for new features. Ship with caution.