
# check-ai

```shell
npx check-ai
```

Audit any repository for AI-readiness.


One command. 66 checks. Zero dependencies. Scans for agent configs, repo hygiene, grounding docs, testing safety nets, prompt templates, MCP integrations, AI dependencies — and scores it all on a 0–10 scale.

```
  🧹 Repo Hygiene    ████████████░░░  77% (26/34)
  📄 Grounding Docs  ██████████░░░░░  65% (15/23)
  🧪 Testing         ██████████████░  90% (9/10)
  🤖 Agent Configs   ████████████░░░  75% (55/73)
  🔒 AI Context      ██████░░░░░░░░░  40% (6/15)
  🧩 Prompts         ████░░░░░░░░░░░  28% (5/18)
  🔌 MCP             ███████████████  100% (11/11)
  📦 AI Deps         ███████████████  100% (4/4)

  ──────────────────────────────────────────────────

   A   Strong — AI-ready

  ████████████████████████████████░░░░░░░░  7.8/10
  38 of 66 checks passed · 131/188 pts

  ──────────────────────────────────────────────────
```

## Install & Run

```shell
npx check-ai
```

Scan a specific repo:

```shell
npx check-ai /path/to/repo
```

## Options

| Flag | Description |
| --- | --- |
| `--json` | Machine-readable JSON output |
| `--verbose`, `-v` | Include low-priority (nice-to-have) recommendations |
| `--no-interactive` | Disable animated output (auto-detected in CI / pipes) |
| `--ci` | Alias for `--no-interactive` |
| `-h`, `--help` | Show help |
| `--version` | Show version |

## CI Integration

check-ai exits with code 1 when the score is below 3/10, so you can use it as a CI gate:

```yaml
# GitHub Actions
- name: AI Readiness Check
  run: npx check-ai
```

```yaml
# GitLab CI
ai-audit:
  script: npx check-ai --ci
```

## JSON Output

Pipe results into other tools or dashboards:

```shell
npx check-ai --json | jq '.score'
```

```json
{
  "score": 7.8,
  "grade": "A",
  "label": "Strong — AI-ready",
  "checks": { "passed": 38, "total": 66 },
  "sections": { ... },
  "findings": [ ... ]
}
```
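The JSON output and the exit-code behavior combine into a stricter CI gate than the built-in 3/10 threshold. A sketch, assuming `jq` and a POSIX shell; the inline sample report stands in for real `npx check-ai --json` output:

```shell
# Fail the pipeline when the score drops below 5 (stricter than the default 3).
# A sample report stands in here; in CI you would use: report=$(npx check-ai --json)
report='{"score": 7.8, "grade": "A"}'
score=$(printf '%s' "$report" | jq '.score')
# Shell arithmetic is integer-only, so compare floats via awk.
if awk "BEGIN { exit !($score >= 5) }"; then
  echo "AI-readiness OK: $score"
else
  echo "AI-readiness too low: $score" >&2
  exit 1
fi
```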


## What It Audits

check-ai runs 66 checks grouped into 8 sections. Each check has a weight based on real-world impact.

### 🧹 Repo Hygiene

A clean, well-structured repo is the foundation for AI agents to work effectively.

| Check | What it looks for |
| --- | --- |
| Git repo | `.git` directory |
| Gitignore | `.gitignore` |
| Env example | `.env.example`, `.env.sample`, `.env.template` |
| Editor config | `.editorconfig` |
| Linter | ESLint, Pylint, Ruff, RuboCop, golangci-lint configs |
| Formatter | Prettier, Biome, deno fmt, clang-format, rustfmt configs |
| CI pipeline | GitHub Actions, GitLab CI, CircleCI, Jenkins, Travis, Bitbucket Pipelines |
| Standard scripts | `start`, `test`, `lint` in `package.json` or Makefile |
| Dev container | `.devcontainer/` for reproducible environments |
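For the standard-scripts check, a `package.json` along these lines would presumably pass (the script contents are illustrative):

```json
{
  "name": "my-app",
  "scripts": {
    "start": "node src/index.js",
    "test": "vitest run",
    "lint": "eslint ."
  }
}
```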

### 📄 Grounding Docs

Documentation that helps AI agents understand what your project is and how it works.

| Check | What it looks for |
| --- | --- |
| README | `README.md` |
| README quality | Checks for install instructions, usage, structure, code blocks, headings |
| Contributing guide | `CONTRIBUTING.md` |
| Architecture doc | `architecture.md`, `ARCHITECTURE.md`, `docs/architecture.md` |
| Tech stack doc | `tech-stack.md`, `docs/tech-stack.md` |
| AI requirements | `.ai/requirements`, `.ai/docs`, `docs/prd` |
| llms.txt | `llms.txt`, `llms-full.txt` (the llms.txt standard) |
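As a sketch of the llms.txt format (per the llms.txt proposal: an H1, a one-line blockquote summary, then link sections; project name and links here are illustrative):

```markdown
# my-project

> One-line summary of what the project does and who it is for.

## Docs

- [README](README.md): installation and usage
- [Architecture](docs/architecture.md): how the pieces fit together
```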

### 🧪 Testing Safety Net

Tests catch agent-introduced regressions before they ship.

| Check | What it looks for |
| --- | --- |
| Test directory | `tests/`, `test/`, `__tests__/`, `spec/`, `e2e/`, `cypress/`, `playwright/` |
| Test runner config | Jest, Vitest, Playwright, Cypress, pytest, RSpec configs |
| Coverage config | nyc, c8, coveragerc, Codecov configs |

### 🤖 Agent Configs

The core of AI-readiness. Having at least one AI tool configured earns a large bonus — because in practice, teams use one tool (Cursor or Windsurf or Claude Code), not all of them at once.

| Check | What it looks for |
| --- | --- |
| At least one AI tool | Any tool-specific config found (big bonus) |
| AGENTS.md | Universal cross-tool agent instructions (agents.md) |
| AGENTS.md quality | Content analysis: build commands, test instructions, style guide, code examples |
| Nested AGENTS.md | Deep scan for per-module AGENTS.md files |
| `.agents/` | Agent assets directory (skills, plans) |
| Claude Code | `CLAUDE.md`, `.claude/`, `.claude/settings.json` |
| Cursor | `.cursorrules`, `.cursor/rules/` |
| Windsurf | `.windsurfrules` (legacy), `.windsurf/rules/` (new), `.windsurf/skills/`, `.windsurf/workflows/` |
| GitHub Copilot | `.github/copilot-instructions.md`, `.github/instructions/` |
| OpenAI Codex | `.codex/`, `CODEX.md` |
| Google Gemini | `.gemini/` |
| Aider | `.aider.conf.yml` |
| Roo Code | `.roo/` |
| Continue | `.continue/`, `.continuerc.json` |
| Amp (Sourcegraph) | Reads AGENTS.md (counted via the AGENTS.md check) |
| JetBrains Junie | `.junie/`, `.junie/guidelines.md` |
| Entire HQ | `.entire/` (captures AI agent sessions per git push) |
| OpenCode | `opencode.json`, `.opencode/` (agents, commands, skills, plugins) |
| Zed | `.rules` |
| Trae | `.trae/rules/` |
| Cline | `.clinerules` |
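Since the AGENTS.md quality check looks for concrete signals (build commands, test instructions, style guide), a skeleton along these lines would presumably score well (contents are illustrative, not a check-ai requirement):

```markdown
# AGENTS.md

## Build
- `npm install`, then `npm run build`

## Test
- Run `npm test` before every commit; all tests must pass.

## Style
- ESLint + Prettier are enforced; run `npm run lint` to check.
```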

### 🔒 AI Context

Files that control what AI agents can and cannot see.

| Check | What it looks for |
| --- | --- |
| Cursor ignore | `.cursorignore` |
| Cursor indexing ignore | `.cursorindexingignore` |
| AI ignore | `.aiignore`, `.aiexclude` |
| CodeRabbit | `.coderabbit.yaml` |
| Copilot ignore | `.copilotignore` |
| Codeium ignore | `.codeiumignore` |
| Instruction files | Deep scan for `.instructions.md` files |
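These ignore files generally follow `.gitignore` syntax. An illustrative `.cursorignore` that keeps secrets and generated output away from the agent (entries are examples, not defaults):

```
# Keep secrets and generated output out of AI context
.env*
!.env.example
dist/
node_modules/
*.pem
```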

### 🧩 Prompts & Skills

Reusable prompt templates and agent skill definitions.

| Check | What it looks for |
| --- | --- |
| Prompt templates (.yml) | Deep scan for `.prompt.yml` files |
| Prompt templates (.md) | Deep scan for `.prompt.md` files |
| Prompts directory | `prompts/`, `.prompts/`, `.ai/prompts/` |
| Skills | Deep scan for `SKILL.md` files |
| Claude commands | `.claude/commands/` |
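Prompt template formats vary by tool; one illustrative shape for a `prompts/review.prompt.md` (the frontmatter and contents are assumptions, not a check-ai requirement):

```markdown
---
description: Review a diff for security issues
---

Review the following diff for security vulnerabilities.
Flag injection risks, hardcoded secrets, and unsafe deserialization.
```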

### 🔌 MCP (Model Context Protocol)

Tool integrations that extend agent capabilities.

| Check | What it looks for |
| --- | --- |
| MCP config | `.mcp.json`, `mcp.json` |
| MCP server count | Parses config and counts configured servers |
| MCP directory | `.mcp/` |
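A minimal `.mcp.json`, assuming the common `mcpServers` shape used by Claude Code and similar clients (server name and args are illustrative):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "."]
    }
  }
}
```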

### 📦 AI Dependencies

Detects AI SDK usage in your project.

| Check | What it looks for |
| --- | --- |
| AI SDKs | Scans `package.json`, `requirements.txt`, `pyproject.toml` for OpenAI, Anthropic, LangChain, Vercel AI SDK, Google AI, Hugging Face, MCP SDK, vector DBs, tokenizers, and more (~40 packages) |
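A rough sketch of how such detection could work for `package.json` (the package list and logic are illustrative, not check-ai's actual source):

```javascript
// Illustrative AI-SDK detection, not check-ai's real implementation.
const AI_PACKAGES = ["openai", "@anthropic-ai/sdk", "langchain", "ai", "@huggingface/inference"];

function findAiDeps(pkg) {
  // Merge runtime and dev dependencies, then keep known AI packages.
  const deps = Object.keys({ ...pkg.dependencies, ...pkg.devDependencies });
  return deps.filter((name) => AI_PACKAGES.includes(name));
}

// Example: a project using the OpenAI SDK alongside non-AI deps.
console.log(findAiDeps({ dependencies: { openai: "^4.0.0", express: "^4.18.0" } }));
// → [ 'openai' ]
```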

## How Scoring Works

Each check has a weight based on how much it impacts AI-readiness.

The raw score is normalized to a 0–10 scale:

| Grade | Score | Verdict |
| --- | --- | --- |
| A+ | 9–10 | Exemplary — fully AI-ready |
| A | 7–9 | Strong — AI-ready |
| B | 5–7 | Decent — partially AI-ready |
| C | 3–5 | Weak — minimal AI setup |
| D | 1–3 | Poor — barely AI-aware |
| F | 0–1 | None — not AI-ready |
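The thresholds above translate directly into a lookup. A sketch (boundary handling is assumed, since the published ranges overlap at their endpoints):

```javascript
// Illustrative grade mapping from the table above (boundary handling assumed).
function grade(score) {
  if (score >= 9) return "A+";
  if (score >= 7) return "A";
  if (score >= 5) return "B";
  if (score >= 3) return "C";
  if (score >= 1) return "D";
  return "F";
}

console.log(grade(7.8)); // → "A", matching the sample report above
```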

### Scoring Philosophy

- Having any one AI tool configured earns a big bonus. People use Cursor or Windsurf or Claude Code — not all at once. The tool doesn't penalize you for picking one.
- AGENTS.md is weighted highest among individual checks because it's the universal, cross-tool standard.
- Content quality matters, not just file existence. AGENTS.md and README.md are analyzed for real signals like build commands, test instructions, code examples, and headings.
- Deep scanning walks your file tree (up to 6 levels) to find nested AGENTS.md, `.prompt.yml`, SKILL.md, and `.instructions.md` files.
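A toy illustration of weight-based scoring (the check names, weights, and linear normalization are illustrative; check-ai's real values and bonus logic are internal):

```javascript
// Toy weighted scoring: earned points over possible points, scaled to 0–10.
const checks = [
  { name: "AGENTS.md", weight: 8, passed: true },
  { name: ".gitignore", weight: 2, passed: true },
  { name: "llms.txt", weight: 3, passed: false },
];

const earned = checks.reduce((sum, c) => sum + (c.passed ? c.weight : 0), 0);
const possible = checks.reduce((sum, c) => sum + c.weight, 0);
const score = (earned / possible) * 10;

console.log(score.toFixed(1)); // → "7.7"
```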

## Interactive Mode

When run in a terminal (TTY), check-ai shows:

- A spinner with live progress during scanning
- An animated score bar that fills in real time
- A section-by-section reveal with staggered items

It automatically falls back to static output when piped or run in CI environments.
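The fallback presumably hinges on Node's standard TTY detection; a minimal sketch (the `CI` environment-variable check is an assumption, not confirmed behavior):

```javascript
// Static output when stdout is piped or a CI env var is set (sketch, not actual source).
const interactive = Boolean(process.stdout.isTTY) && !process.env.CI;
console.log(interactive ? "animated mode" : "static mode");
```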


## Zero Dependencies

Built entirely with Node.js built-ins (`fs`, `path`, `readline`). No install required beyond `npx`. Works offline — no network calls, pure static analysis.

## License

MIT