AI Council Documentation
Everything you need to know to get the most out of AI Council.
Installation
AI Council is distributed as an npm package with pre-compiled binaries for all major platforms.
npm install -g @mugzie/ai-council
This automatically installs the correct binary for your platform:
- macOS - darwin-arm64 (Apple Silicon) or darwin-x64 (Intel)
- Linux - linux-arm64 or linux-x64
- Windows - win32-x64
Run ai-council --help to verify the installation was successful.
Claude Code CLI & Agent SDK (Optional)
To use Agent Review mode (--agent), you need the Claude Code CLI and the Claude Agent SDK.
npm install -g @anthropic-ai/claude-code
npm install @anthropic-ai/claude-agent-sdk
You will also need an Anthropic API key. Set it in your environment:
export ANTHROPIC_API_KEY="sk-ant-..."
Claude Code and the Agent SDK are only needed for --agent mode. All other review modes (review, security, perf, arch, sanity, --deep-analysis) work with just an OpenAI and/or Gemini API key.
License Setup
AI Council requires a valid license key to operate. Purchase your license from our store.
Once you have your license key, set it as an environment variable:
export AI_COUNCIL_LICENSE_KEY="your-license-key"
Or persist it in your shell profile (e.g. `~/.bashrc` or `~/.zshrc`):
# AI Council License
export AI_COUNCIL_LICENSE_KEY="your-license-key"
Quick Start
Get up and running with AI Council in under a minute.
Install the package
npm install -g @mugzie/ai-council
Set your license key
export AI_COUNCIL_LICENSE_KEY="your-key"
Configure AI provider (at least one required)
# OpenAI (recommended)
export OPENAI_API_KEY="sk-..."
# Or Google Gemini
export GEMINI_API_KEY="..."
# Or both for best results!
Run your first review
ai-council review --diff --branch=main --pretty
Basic Usage
AI Council analyzes your git diff and provides recommendations from multiple AI perspectives.
# Review changes against main branch
ai-council review --diff --branch=main --pretty
# Review changes against develop branch
ai-council review --diff --branch=develop --pretty
# Review only staged changes
ai-council review --staged --pretty
# Review only unstaged changes
ai-council review --unstaged --pretty
# Review both staged and unstaged changes
ai-council review --all --pretty
Git Targeting Options
AI Council provides flexible options for targeting specific commits, branches, and commit ranges for review.
| Flag | Description | Example |
|---|---|---|
| `--branch=<name>` | Compare current HEAD against a branch | `--branch=main` |
| `--commit=<hash>` | Review a specific commit | `--commit=abc123` |
| `--range=<from>..<to>` | Review a range of commits | `--range=main~5..main` |
| `--review-branch=<name>` | Review all commits on a branch vs its merge-base | `--review-branch=feature` |
| `--base=<name>` | Base branch for `--review-branch` (default: `main`) | `--base=develop` |
| `--staged` | Review only staged changes (`git diff --cached`) | `--staged` |
| `--unstaged` | Review only unstaged changes (`git diff`) | `--unstaged` |
| `--all` | Review both staged and unstaged changes (`git diff HEAD`) | `--all` |
# Review staged/unstaged git changes (auto-detect)
ai-council review --pretty
# Explicitly review only staged changes
ai-council review --staged --pretty
# Explicitly review only unstaged changes
ai-council review --unstaged --pretty
# Review both staged and unstaged changes
ai-council review --all --pretty
# Review current HEAD vs main branch
ai-council review --branch=main --pretty
# Review a specific commit
ai-council review --commit=abc123 --pretty
# Review the last 5 commits on main
ai-council review --range=main~5..main --pretty
# Review all commits on a feature branch
ai-council review --review-branch=feature --pretty
# Review feature branch against develop (instead of main)
ai-council review --review-branch=feature --base=develop --pretty
When multiple git options are provided, they are processed in this order: --commit > --range > --review-branch > --branch. If none are specified, working-tree changes are reviewed with a waterfall fallback: staged → unstaged → HEAD. Use --staged, --unstaged, or --all to override the auto-detect.
When using --branch or --review-branch, if the branch doesn't exist as a local ref, AI Council automatically tries origin/<branch>. This is useful in CI environments or shallow clones where only remote tracking branches are available.
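Because of the `origin/<branch>` fallback, a bare `--branch=main` usually works in CI, but fetching the base branch explicitly is a cheap safeguard in shallow clones. A sketch for a CI script:

```shell
# Ensure the base branch ref exists before reviewing. If "main" has no local
# ref, AI Council falls back to origin/main automatically, but an explicit
# fetch avoids surprises in shallow or partial clones:
git fetch origin main
ai-council review --branch=main --ci
```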
Output Formats
Control how results are displayed:
| Flag | Description |
|---|---|
| `--pretty` | Human-readable formatted output with colors and emojis |
| `--json` | JSON output for programmatic use |
| `--ci` | CI mode - exits with code 1 on REJECT or low confidence |
| `--suggestions` | Ask agents for code fix examples when they vote REVISE or REJECT (see Fix Suggestions) |
| `--pre-context` | Enrich reviewer prompts with AST-extracted, embedding-ranked code context (see Pre-Review Context) |
| `--pre-context-top-k=<n>` | Number of top code units to inject (implies `--pre-context`, default: 5) |
| `--deep-analysis` | File-by-file deep review with structured findings, project-rule awareness, and line-level detail (see Deep Analysis) |
| `--agent` | Two-pass Claude Agent SDK review with codebase exploration; implies `--deep-analysis` (see Agent Review) |
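For programmatic use, `--json` output can be captured and inspected with standard tools. A sketch (inspect the actual JSON shape with `jq .` before depending on specific field names, since they may vary by version):

```shell
# Capture machine-readable results for later processing
ai-council review --branch=main --json > review.json
# Explore the structure before scripting against it
jq . review.json
```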
Review Commands
AI Council offers specialized review modes that focus on specific aspects of your code.
ai-council review
Comprehensive code review with all agents: Senior Developer, Security, Performance, Pragmatist, and Gemini.
ai-council review --diff --pretty
ai-council security
Security-focused review examining vulnerabilities, authentication, and secure coding practices.
ai-council security --branch=main --pretty
ai-council perf
Performance analysis focusing on algorithmic complexity, bottlenecks, and optimization opportunities.
ai-council perf --commit=abc123 --pretty
ai-council arch
Architecture review evaluating design patterns, scalability, and system structure.
ai-council arch --review-branch=feature --pretty
ai-council sanity
Quick sanity check with Pragmatist, Senior Dev, and Architect for a balanced perspective.
ai-council sanity --range=main~3..main --pretty
ai-council decide
Advanced tool for custom queries. Choose your agent set and provide a custom question for the council to evaluate.
ai-council decide --agents=security --branch=main --question="Is this auth flow secure?"
Interactive Chat
Run an interactive chat session where you can ask the council multiple questions in a row. Choose an agent set once, then type questions and see verdicts with rationale and agent votes.
ai-council chat
With git context (same options as review commands):
ai-council chat --branch=main
ai-council chat --commit=abc123
ai-council chat --range=main~5..main
ai-council chat --review-branch=feature
At startup you choose the agent set: review, security, perf, arch, or sanity. Then type your question and press Enter. Type exit or quit to end the session.
You can request fix suggestions per-message by including phrases like "with suggestions", "fix examples", or "code examples" in your question—or the AI can infer when you want them. Use --suggestions to always request them. See Fix Suggestions for full details.
Chat is useful for iterating on the same codebase or branch—e.g. "Should we cache this?" then "What if we add a TTL?"—without re-running a full review each time.
Fix Suggestions
When agents vote REVISE or REJECT, they can optionally provide concrete code fix examples. Suggestions are syntax-highlighted and prettified in the terminal, covering JavaScript, TypeScript, Python, Go, Rust, Java, Ruby, PHP, SQL, Shell, and more.
CLI Usage
Add --suggestions to any review command:
# Review with fix examples from dissenting agents
ai-council review --diff --branch=main --suggestions
# Security review with suggestions
ai-council security --branch=main --suggestions
# Chat with suggestions always on
ai-council chat --suggestions
When --suggestions is used with review commands, the output switches from raw JSON to a human-readable format showing the verdict, rationale, agent votes, and fix suggestions in styled boxes.
Chat Mode
In chat mode, you can request suggestions in three ways:
- `--suggestions` flag — always request suggestions for every turn.
- Trigger phrases — include phrases like "with suggestions", "fix examples", "code examples", or "with examples" in your message.
- AI inference — the tool can automatically detect when your message implies you want fix examples (e.g. "how would I fix this?") and request them for that turn.
Suggestions are AI-generated, sanitized for display only, and never executed by the tool. They are not guaranteed correct or safe—always review and validate before applying.
MCP / Programmatic
All MCP tools accept an optional suggestions boolean parameter. When true, agents include a "suggestion" field in their vote when they recommend REVISE or REJECT.
Suggestion Rate Limiting (Chat)
When no explicit flag or trigger phrase is used, the AI classifier determines whether to request suggestions. This classifier is rate-limited to prevent excessive API calls. Configure via environment variables:
| Variable | Default | Description |
|---|---|---|
| `AI_COUNCIL_SUGGESTION_CLASSIFY_MAX` | `20` | Maximum classify calls per window |
| `AI_COUNCIL_SUGGESTION_CLASSIFY_WINDOW_SECONDS` | `60` | Window duration in seconds |
If chat reports a suggestion rate limit, either increase the env vars above or use --suggestions to always request them.
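For example, to raise the classifier budget for long chat sessions (the values below are illustrative):

```shell
# Allow up to 50 classify calls per 2-minute window
export AI_COUNCIL_SUGGESTION_CLASSIFY_MAX=50
export AI_COUNCIL_SUGGESTION_CLASSIFY_WINDOW_SECONDS=120
```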
Pre-Review Context
An opt-in preprocessing stage that runs before the LLM reviewers, extracting and ranking the most relevant code units from your diff or input to give agents better context.
How It Works
- AST Extraction — Uses tree-sitter to parse TypeScript/JavaScript into an AST and extract semantic code units (functions, classes, methods, interfaces, type aliases). Falls back to line-based chunking for unsupported languages.
- Embedding Ranking — Each extracted unit is embedded using CodeBERT and ranked by cosine similarity to the review question.
- Context Injection — The top-k most relevant code units are injected into each reviewer agent's prompt, giving them focused structural context beyond the raw diff.
The entire pipeline is fail-open: if AST extraction, embedding, or any step fails or times out, the review proceeds normally without enriched context.
CLI Usage
# Enable with default settings (top 5 units)
ai-council review --branch=main --pre-context --pretty
# Customize the number of code units injected
ai-council review --branch=main --pre-context-top-k=10 --pretty
# Works with all review types
ai-council security --branch=main --pre-context --pretty
ai-council arch --review-branch=feature --pre-context --pretty
MCP / Programmatic
All MCP tools accept:
- `preContext` (boolean) — enable the AST + embedding pipeline.
- `preContextTopK` (number) — number of top code units to inject (implies `preContext`, default: 5).
Configuration
Fine-tune the pipeline via environment variables:
| Variable | Default | Description |
|---|---|---|
| `AI_COUNCIL_PRE_CONTEXT` | `false` | Enable pre-review context globally (`true` or `1`) |
| `AI_COUNCIL_PRE_CONTEXT_TOP_K` | `5` | Number of top code units to inject |
| `AI_COUNCIL_PRE_CONTEXT_MAX_CHARS` | `1500` | Max characters per code snippet |
| `AI_COUNCIL_PRE_CONTEXT_TIMEOUT` | `15000` | Timeout in ms for the entire pipeline |
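For instance, to enable the pipeline globally and raise its budgets (the values below are illustrative):

```shell
# Enable pre-review context for every run, with a larger context budget
export AI_COUNCIL_PRE_CONTEXT=true
export AI_COUNCIL_PRE_CONTEXT_TOP_K=8
export AI_COUNCIL_PRE_CONTEXT_MAX_CHARS=2000
```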
Pre-review context is most useful for large diffs where agents might otherwise miss important structural relationships. For small, focused diffs the raw diff alone is usually sufficient.
Deep Analysis
A file-by-file review pipeline that produces structured, line-specific findings instead of the standard single-prompt council vote. Each file in the diff is reviewed individually by a dedicated Deep Reviewer agent, then results are aggregated by the Judge.
How It Works
- Diff Parsing — The unified diff is parsed into per-file patches. Binary files, snapshots, `.d.ts`, `dist/`, sourcemaps, and minified files are automatically skipped.
- Full-File Context — For each changed file, the full post-change file content is read from disk and included alongside the patch so the LLM has surrounding context, not just hunks (capped at 60k chars per file).
- Parallel File-Level Review — Files are reviewed in parallel batches (8 concurrent) by the Deep Reviewer agent. Each review returns structured JSON with a summary, recommendation (APPROVE / REVISE / REJECT), confidence score, and an array of findings. If JSON parsing fails, a retry prompt is sent automatically.
- Concrete Code Findings — Every finding includes the file path, line number, severity, issue description, plus the exact `problematicCode` and a concrete `recommendedCodeChange` showing the fix.
- Aggregation — The Judge agent reviews all per-file results and produces a single overall recommendation, weighting high-severity findings heavily.
Severity Levels
| Severity | Meaning | Examples |
|---|---|---|
| `high` | Bug, security hole, or data-loss risk | Unvalidated input, race condition, null dereference |
| `medium` | Logic issue, missing edge case, testability problem | Off-by-one error, unchecked promise, missing error handling |
| `low` | Minor improvement or readability concern | Naming, dead code, redundant cast |
CLI Usage
# Deep file-by-file review against main
ai-council review --branch=main --deep-analysis --pretty
# Combine with suggestions for fix examples
ai-council review --branch=main --deep-analysis --suggestions --pretty
# Works with all git targeting options
ai-council review --staged --deep-analysis --pretty
ai-council review --review-branch=feature --deep-analysis --pretty
When --deep-analysis is used, the output includes a structured findings table showing each file reviewed, its recommendation, and line-level issues with severity and suggestions.
Project-Rule Injection
AI Council automatically discovers and injects project-level rules and conventions into all review prompts (both standard and deep analysis). Rules are loaded from these locations in your repository root:
- `.cursorrules`
- `.cursor/rules/` (all files, recursive)
- `.claude/rules/` (all files, recursive)
The combined rules text is capped at 8,000 characters to avoid blowing up context windows. If no rule files are found, the review proceeds normally without injected rules.
If your project already has .cursorrules or .cursor/rules/ files, AI Council picks them up automatically—no extra configuration needed.
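For example, a minimal `.cursorrules` file (the rule text below is illustrative) is injected into every review prompt on the next run:

```shell
# Write a minimal .cursorrules file at the repo root;
# AI Council discovers and injects it automatically
cat > .cursorrules <<'EOF'
- Prefer async/await over raw promise chains
- Every exported function needs a unit test
EOF
```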
MCP / Programmatic
All MCP tools accept an optional deepAnalysis boolean parameter. When true, the review is routed through the file-by-file deep analysis pipeline and the result includes fileReviews and findings arrays.
Deep analysis sends one LLM request per file in the diff, plus an aggregation request. For large diffs with many files, this can significantly increase API usage compared to the standard single-prompt review.
Agent Review
A two-pass review pipeline powered by the Claude Agent SDK that gives Claude full codebase access to trace callers, read files, check tests, and find dead code—not just review the diff.
How It Works
- Pass 1 — Surface Scan — All diffs are fed to Claude with no tool access. Claude produces a high-level overview, a priority list of files needing scrutiny, and any immediate risks visible from the diffs alone.
- Pass 2 — Deep File Analysis — For each changed file, a new session is forked from Pass 1 (inheriting that context) and Claude is given `Read`, `Grep`, and `Glob` tool access. Claude actively explores the codebase: reads full files, searches for callers of changed exports, traces data flow, checks tests, and validates error handling. Reviews run in parallel batches (8 concurrent).
- Aggregation — Identical to Deep Analysis: the Judge aggregates all per-file results into a single recommendation.
CLI Usage
# Two-pass agent review against main
ai-council review --agent --branch=main --pretty
# Agent review of staged changes
ai-council review --agent --staged --pretty
# Combine with suggestions
ai-council review --agent --branch=main --suggestions --pretty
The --agent flag implies --deep-analysis, so the output uses the same structured findings format with per-file reviews, severity levels, and concrete code suggestions.
Prerequisites
Agent review requires two additional dependencies:
- Claude Agent SDK — install with `npm install @anthropic-ai/claude-agent-sdk`
- Claude Code CLI — see the Anthropic docs for installation
If the SDK is not installed, the command fails with a clear error message and installation instructions.
MCP / Programmatic
All MCP tools accept an optional agent boolean parameter. When true, the review is routed through the two-pass Claude Agent SDK pipeline (implies deepAnalysis).
Agent review is the most thorough but also the most expensive mode. Each file gets a multi-turn Claude session with tool use (up to 15 turns per file). Use it for critical reviews where codebase-aware analysis justifies the cost.
Agent review excels when changes affect exports consumed by other files, modify shared utilities, or touch code with complex call chains. For self-contained changes, Deep Analysis is faster and cheaper.
Utility Commands
AI Council provides helpful utility commands for setup, debugging, and information.
ai-council tools
Lists all available MCP tools with their parameters. Useful for understanding what's available when integrating with Claude or Cursor.
ai-council tools
Shows: review, security, perf, arch, sanity (analysis tools) and decide (advanced custom queries)
ai-council test
Tests API connectivity for all configured providers. Verifies your API keys are valid and working.
ai-council test
🔍 Testing API connectivity...
✅ OpenAI API: Connected (gpt-4o-mini)
✅ Gemini API: Connected (gemini-2.0-flash)
All APIs operational!
ai-council chat
Start an interactive chat session: choose an agent set, then ask questions and get verdicts in a loop. Supports the same git options as review (--branch, --commit, --range, --review-branch). Type exit or quit to end.
ai-council chat
ai-council chat --branch=main
See Interactive Chat for full details.
ai-council --version
Display the installed version number.
ai-council --version
ai-council -v
ai-council --help
Comprehensive help with all commands, MCP tools, environment variables, and usage examples.
ai-council --help
If you're getting errors, run ai-council test first to verify your API keys are configured correctly and the connections are working.
CI/CD Mode
Integrate AI Council into your continuous integration pipeline to automatically review pull requests.
name: AI Council Review
on:
pull_request:
branches: [main, develop]
jobs:
review:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0
- uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install AI Council
run: npm install -g @mugzie/ai-council
- name: Run Code Review
env:
AI_COUNCIL_LICENSE_KEY: ${{ secrets.AI_COUNCIL_LICENSE_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
# Optional: Add Gemini for diverse AI perspectives
GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
# Optional: Customize models (defaults shown)
AI_COUNCIL_MODEL: gpt-4o-mini
AI_COUNCIL_GEMINI_MODEL: gemini-2.0-flash
run: ai-council review --diff --branch=origin/main --ci
Adding Gemini
For the most comprehensive reviews, add your Gemini API key to enable the Gemini Structural Thinker agent. This provides diverse AI perspectives and helps catch edge cases that a single provider might miss.
Customizing Models
You can customize which models power your council by setting environment variables:
| Variable | Default | Description |
|---|---|---|
| `AI_COUNCIL_MODEL` | `gpt-4o-mini` | OpenAI model for most agents. Use `gpt-4o` for higher quality reviews. |
| `AI_COUNCIL_GEMINI_MODEL` | `gemini-2.0-flash` | Gemini model for the Gemini Thinker. Use `gemini-1.5-pro` for deeper analysis. |
The default models (gpt-4o-mini and gemini-2.0-flash) offer a good balance of speed and cost. For critical codebases, consider using gpt-4o and gemini-1.5-pro for higher quality reviews.
AI Council exits with code 1 when overall confidence is below 0.65. In CI mode (--ci), it also exits 1 when the final decision is REJECT, or when the decision is REVISE and confidence is below 0.75. Use this to block merges in pipelines.
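Outside GitHub Actions, the same exit code can gate any pipeline step. A sketch:

```shell
# Fail the pipeline on REJECT or low-confidence verdicts (relies on --ci exit codes)
if ai-council review --branch=origin/main --ci; then
  echo "Council approved: continuing"
else
  echo "Council blocked this change (REJECT or low confidence)" >&2
  exit 1
fi
```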
MCP Server
AI Council includes a Model Context Protocol (MCP) server for integration with AI assistants like Claude and Cursor.
ai-council mcp
The MCP server exposes AI Council's functionality as tools that can be invoked by AI assistants, enabling them to request code reviews during conversations.
All analysis tools accept optional parameters:
- `gitScope` — which working-tree changes to include: `"auto"` (default; staged → unstaged → HEAD fallback), `"staged"`, `"unstaged"`, or `"all"`.
- `suggestions` — when `true`, agents include code fix examples in their vote when they recommend REVISE or REJECT. See Fix Suggestions.
- `preContext` — when `true`, enrich reviewer prompts with AST-extracted, embedding-ranked code context. See Pre-Review Context.
- `preContextTopK` — number of top code units to inject (implies `preContext`, default: 5).
- `deepAnalysis` — when `true`, use file-by-file deep review with structured findings and project-rule awareness. See Deep Analysis.
- `agent` — when `true`, use two-pass Claude Agent SDK review with codebase exploration (implies `deepAnalysis`). See Agent Review.
- `cwd` — working directory for git commands (defaults to process cwd). Useful when the assistant is not running in the repo root.
Claude Integration
Add AI Council to Claude Code for seamless code review within your Claude conversations.
claude mcp add --transport stdio ai-council -- ai-council mcp
Once configured, Claude can invoke AI Council to review code snippets or git diffs during your conversation.
Cursor Integration
Configure AI Council as an MCP server in Cursor for integrated code review.
{
"mcpServers": {
"ai-council": {
"command": "ai-council",
"args": ["mcp"],
"env": {
"AI_COUNCIL_LICENSE_KEY": "your-license-key",
"OPENAI_API_KEY": "your-openai-key",
"GEMINI_API_KEY": "your-gemini-key",
"AI_COUNCIL_MODEL": "gpt-4o-mini",
"AI_COUNCIL_GEMINI_MODEL": "gemini-2.0-flash"
}
}
}
}
Place the .mcp.json file in your project root or home directory for Cursor to automatically detect it.
Environment Variables
Configure AI Council's behavior through environment variables.
| Variable | Required | Default | Description |
|---|---|---|---|
| `AI_COUNCIL_LICENSE_KEY` | Yes | - | Your license key from the store |
| `OPENAI_API_KEY` | No* | - | OpenAI API key for GPT models |
| `GEMINI_API_KEY` | No* | - | Google Gemini API key |
| `AI_COUNCIL_MODEL` | No | `gpt-4o-mini` | OpenAI model to use |
| `AI_COUNCIL_GEMINI_MODEL` | No | `gemini-2.0-flash` | Gemini model to use |
| `AI_COUNCIL_SUGGESTION_CLASSIFY_MAX` | No | `20` | Max suggestion-classify calls per window in chat |
| `AI_COUNCIL_SUGGESTION_CLASSIFY_WINDOW_SECONDS` | No | `60` | Window in seconds for the above |
| `AI_COUNCIL_PRE_CONTEXT` | No | `false` | Enable pre-review AST + embedding context (`true`/`1`) |
| `AI_COUNCIL_PRE_CONTEXT_TOP_K` | No | `5` | Number of top code units to inject |
| `AI_COUNCIL_PRE_CONTEXT_MAX_CHARS` | No | `1500` | Max chars per code snippet in pre-context |
| `AI_COUNCIL_PRE_CONTEXT_TIMEOUT` | No | `15000` | Timeout in ms for pre-context pipeline |
* At least one of OPENAI_API_KEY or GEMINI_API_KEY is required.
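A typical shell-profile setup combining the required variables might look like this (all values below are placeholders):

```shell
# ~/.bashrc or ~/.zshrc - AI Council configuration (placeholder values)
export AI_COUNCIL_LICENSE_KEY="your-license-key"
export OPENAI_API_KEY="sk-..."     # at least one provider key is required
export GEMINI_API_KEY="..."        # optional second provider for diverse reviews
export AI_COUNCIL_MODEL="gpt-4o"   # optional: trade cost for review quality
```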
AI Providers
AI Council supports multiple AI providers for diverse perspectives.
OpenAI (Primary)
Used for most agents including Senior Developer, Security, Performance, Architect, and Pragmatist.
export OPENAI_API_KEY="sk-..."
Google Gemini
Powers the Gemini Structural Thinker agent, providing alternative reasoning and challenging assumptions.
export GEMINI_API_KEY="..."
Configure both providers for the most comprehensive reviews. The diversity of AI models leads to better coverage and catches more edge cases.
Model Selection
Customize which AI models power your council.
OpenAI Models
| Model | Speed | Cost | Quality |
|---|---|---|---|
| `gpt-4o-mini` | Fast | Low | Good (default) |
| `gpt-4o` | Medium | High | Excellent |
| `gpt-4-turbo` | Medium | High | Excellent |
Gemini Models
| Model | Speed | Cost | Quality |
|---|---|---|---|
| `gemini-2.0-flash` | Fast | Low | Good (default) |
| `gemini-1.5-pro` | Medium | Medium | Excellent |
Agents Overview
AI Council assembles a diverse panel of specialized AI agents.
Senior Developer
Generalist. 15+ years of experience. Focuses on maintainability, best practices, readability, testability, and long-term sustainability. Comments on any aspect of the code.
Security Engineer
Specialist. Identifies security vulnerabilities, authentication issues, SQL injection, XSS, CSRF, and other security concerns. Abstains when code has no security implications.
Performance Engineer
Specialist. Analyzes algorithmic complexity (O(n), O(log n)), memory usage, caching opportunities, and optimization potential. Abstains for code without performance implications.
Software Architect
Specialist. Evaluates design patterns, scalability, separation of concerns, system boundaries, and how code fits into the broader system. Abstains for simple changes.
Gemini Thinker
Challenger. Powered by Google Gemini. Challenges assumptions, reframes problems, identifies edge cases, and provides alternative system-level perspectives.
Pragmatist
Generalist. Balances perfectionism with shipping. Considers deadlines, team velocity, and whether the code effectively solves the problem. Sometimes "good enough" is right.
Voting System
Each agent casts a vote with a confidence score. The system uses these to reach a final decision.
Vote Types
- APPROVE — Code meets quality standards and can be merged
- REVISE — Code needs improvements before merging
- REJECT — Code has fundamental issues that must be addressed
- ABSTAIN — Agent defers due to low confidence or irrelevant domain
Confidence Scores
Each vote includes a confidence score from 0.0 to 1.0:
- 0.8-1.0: High confidence - agent is certain about their assessment
- 0.5-0.8: Moderate confidence - reasonable certainty
- < 0.5: Low confidence - agent automatically abstains
Abstention Rules
Agents abstain in two scenarios:
- Domain Mismatch: Specialist agents (Security, Performance, Architect) abstain when code doesn't match their domain expertise
- Low Confidence: Any agent with confidence below 0.5 automatically abstains
Debate System
When Gemini dissents from the majority, AI Council triggers an automated debate to explore the disagreement.
Debate Flow
- Detection: System detects Gemini's vote differs from majority
- Escalation: Debate is triggered between Gemini and the strongest majority voice
- Rebuttal: Majority agent responds to Gemini's concerns
- Counter: Gemini provides final response (maintain, partial concede, or full concede)
- Judgment: Judge considers the debate when making final decision
⚡ Debate escalation: Gemini (REJECT) dissents from majority (APPROVE)
1. senior-dev (DEFEND): The null check on line 42 handles the edge case
Gemini mentioned. The early return pattern is intentional for...
2. gemini (PARTIAL_CONCEDE): I acknowledge the null check, but maintain
concern about the async race condition that could still occur when...
Gemini uses a different AI model (Google's) than other agents. This intentional diversity helps catch edge cases and provides truly alternative perspectives that might be missed by a single AI provider.
Security & Sanitization
AI Council applies multiple layers of input and output sanitization to reduce injection and information-leakage risks.
User Input
All user-provided text is sanitized before use in AI prompts:
- Null bytes and control characters are stripped
- Shell command-substitution characters (backticks, `$()`) are removed
- Line endings are normalized and whitespace is collapsed
- Input is capped at 10,000 characters
Agent Suggestions
Suggestion output from agents is sanitized before display:
- Control characters and null bytes are stripped
- Output is capped at 20,000 characters
- Suggestions are never executed by the tool—they are for human review only
Error Logging
API keys (OPENAI_API_KEY, GEMINI_API_KEY) are automatically redacted from any error messages written to stderr, preventing accidental credential leakage in logs or CI output.
Agent suggestions are AI-generated and not guaranteed correct or safe. Always review and validate suggestions before applying them to your codebase.
Need Help?
Can't find what you're looking for? Here are some resources: