Using BattleChain with AI
Give your AI coding agent full BattleChain context — skills, machine-readable docs, and MCP server
BattleChain is built for AI-native development. Whether you're using Claude Code, Cursor, or any tool with a built-in chat, you can give your agent full BattleChain context in seconds.
The BattleChain Skill
The fastest way to give your AI agent BattleChain context is to install the Cyfrin skills package:
npx skills add cyfrin/solskill
This installs three skills:
| Skill | What it does |
|---|---|
| solidity | Production-grade Solidity standards: code quality, testing, security, Foundry workflows |
| battlechain | BattleChain reference: deploying, Safe Harbor, whitehat attacks, contract lifecycle |
| battlechain-tutorial | Interactive wizard: scans your project, asks guided questions, generates all scripts |
Once installed, tell your agent "Deploy my contracts to BattleChain" and battlechain-tutorial will walk you through the full process.
The skills are open source at github.com/Cyfrin/solskill.
Claude Code Plugin
If you use Claude Code, the same skills are also published as a plugin marketplace. This is the most native install path for Claude Code — plugins are managed with /plugin and update in place.
Inside a Claude Code session, add the marketplace:
/plugin marketplace add Cyfrin/solskill
Then install whichever plugins you want:
/plugin install solidity@solskill
/plugin install battlechain@solskill
/plugin install battlechain-tutorial@solskill
Restart Claude Code so the new plugins load:
/exit
claude --continue
Claude Code may suggest /reload-plugins after install. That command does not pick up newly installed plugins — exit the session and run claude --continue instead.
Read the Docs as Markdown
BattleChain publishes machine-readable versions of these docs so AI agents can ingest them directly:
| File | Contents | Size |
|---|---|---|
| /llms.txt | Table of contents with page titles, descriptions, and links | ~4 KB |
| /llms-full.txt | Complete text of every page as clean markdown | ~100 KB |
Both files follow the llms.txt convention and are regenerated on every deploy.
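For orientation, an llms.txt index is plain markdown: a title, a blockquote summary, and sections of linked pages with one-line descriptions. The entries below are illustrative, not the actual BattleChain index:

```markdown
# BattleChain

> Documentation for BattleChain (illustrative summary line).

## Quickstart

- [Deploy a contract](https://docs.battlechain.com/quickstart/deploy.md): Hypothetical example entry
- [Safe Harbor](https://docs.battlechain.com/safe-harbor.md): Hypothetical example entry
```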
To start a conversation with the full BattleChain docs loaded as context, paste this prompt into any AI:
Read https://docs.battlechain.com/llms-full.txt and use it to answer my questions about BattleChain.
Cursor
Add BattleChain as a doc source so Cursor indexes it automatically. Go to Cursor Settings → Features → Docs, click Add, and enter:
https://docs.battlechain.com/llms-full.txt
Once indexed, Cursor's AI will have full BattleChain context when you reference @docs in chat.
MCP Server
BattleChain publishes an MCP (Model Context Protocol) server that gives AI tools programmatic access to search and read these docs.
Server URL:
https://docs.battlechain.com/api/mcp
Claude Code
claude mcp add --transport http battlechain-docs https://docs.battlechain.com/api/mcp
Claude Desktop
Add to your Claude Desktop config (claude_desktop_config.json):
{
  "mcpServers": {
    "battlechain-docs": {
      "url": "https://docs.battlechain.com/api/mcp"
    }
  }
}
Any MCP-compatible client can connect using the server URL above. The server exposes three tools:
| Tool | Args | Description |
|---|---|---|
| search_docs | query: string | Search documentation by keyword or topic |
| read_page | path: string | Read the full content of a specific page as clean markdown |
| list_pages | (none) | List all available documentation pages |
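Under the hood, MCP tools are invoked as JSON-RPC 2.0 `tools/call` requests. A minimal sketch of what a search_docs call looks like on the wire (real clients should use an MCP SDK, which also handles the initialize handshake; this payload is illustrative):

```python
import json

MCP_URL = "https://docs.battlechain.com/api/mcp"

def make_tool_call(tool: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 tools/call request, per the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Example: search the docs for Safe Harbor material.
request = make_tool_call("search_docs", {"query": "Safe Harbor"})
print(json.dumps(request, indent=2))
```

In practice you never build these by hand; the configs above let Claude Code and Claude Desktop do it for you.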
Example prompts
Once connected, just ask your agent a docs question — it picks the right tool automatically:
- "How do I deploy a contract to BattleChain testnet?"
- "What is Safe Harbor and how do I enable it on my contracts?"
- "Walk me through the going-attackable tutorial step by step."
- "List every page under quickstart."
Custom Agents
If you're building an agent that interacts with BattleChain, fetch the docs at startup and include them as context:
- Full context: fetch /llms-full.txt and add it to the system prompt (~100 KB, fits in most context windows)
- Selective context: fetch /llms.txt to build a page index, then retrieve individual pages on demand for RAG pipelines
import httpx

# Fetch the full docs once at startup and prepend them to the system prompt.
docs = httpx.get("https://docs.battlechain.com/llms-full.txt").text

messages = [
    {"role": "system", "content": f"BattleChain documentation:\n\n{docs}"},
    {"role": "user", "content": user_question},
]
Going Further
Want your AI agent to deploy to BattleChain by default and set up Safe Harbor automatically? See Configure Your AI Tools for the deployment prompt block and per-tool config file paths (CLAUDE.md, .cursor/rules, AGENTS.md, and others).