AI Council Documentation

Everything you need to know to get the most out of AI Council.

Installation

AI Council is distributed as an npm package with pre-compiled binaries for all major platforms.

Terminal
npm install -g @mugzie/ai-council

This automatically installs the correct binary for your platform:

  • macOS - darwin-arm64 (Apple Silicon) or darwin-x64 (Intel)
  • Linux - linux-arm64 or linux-x64
  • Windows - win32-x64
💡
Verify Installation

Run ai-council --help to verify the installation was successful.

Claude Code CLI & Agent SDK (Optional)

To use Agent Review mode (--agent), you need the Claude Code CLI and the Claude Agent SDK.

Install Claude Code CLI
npm install -g @anthropic-ai/claude-code
Install Claude Agent SDK
npm install @anthropic-ai/claude-agent-sdk

You will also need an Anthropic API key. Set it in your environment:

Bash / Zsh
export ANTHROPIC_API_KEY="sk-ant-..."
💡
Not required for standard reviews

Claude Code and the Agent SDK are only needed for --agent mode. All other review modes (review, security, perf, arch, sanity, --deep-analysis) work with just an OpenAI and/or Gemini API key.

License Setup

AI Council requires a valid license key to operate. Purchase your license from our store.

Once you have your license key, set it as an environment variable:

Bash / Zsh
export AI_COUNCIL_LICENSE_KEY="your-license-key"
Add to shell profile (~/.zshrc or ~/.bashrc)
# AI Council License
export AI_COUNCIL_LICENSE_KEY="your-license-key"
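After restarting your shell, a quick check can confirm the variable is actually exported. This is just an illustrative helper, not part of AI Council itself:

```shell
# Minimal sketch: report whether the license variable is non-empty.
check_license() {
  if [ -n "${1:-}" ]; then
    echo "license set"
  else
    echo "license missing"
  fi
}

check_license "$AI_COUNCIL_LICENSE_KEY"
```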

Quick Start

Get up and running with AI Council in under a minute.

1

Install the package

npm install -g @mugzie/ai-council
2

Set your license key

export AI_COUNCIL_LICENSE_KEY="your-key"
3

Configure AI provider (at least one required)

# OpenAI (recommended)
export OPENAI_API_KEY="sk-..."

# Or Google Gemini
export GEMINI_API_KEY="..."

# Or both for best results!
4

Run your first review

ai-council review --diff --branch=main --pretty

Basic Usage

AI Council analyzes your git diff and provides recommendations from multiple AI perspectives.

Basic Review
# Review changes against main branch
ai-council review --diff --branch=main --pretty

# Review changes against develop branch
ai-council review --diff --branch=develop --pretty

# Review only staged changes
ai-council review --staged --pretty

# Review only unstaged changes
ai-council review --unstaged --pretty

# Review both staged and unstaged changes
ai-council review --all --pretty

Git Targeting Options

AI Council provides flexible options for targeting specific commits, branches, and commit ranges for review.

  • --branch=<name>: Compare current HEAD against a branch. Example: --branch=main
  • --commit=<hash>: Review a specific commit. Example: --commit=abc123
  • --range=<from>..<to>: Review a range of commits. Example: --range=main~5..main
  • --review-branch=<name>: Review all commits on a branch vs its merge-base. Example: --review-branch=feature
  • --base=<name>: Base branch for --review-branch (default: main). Example: --base=develop
  • --staged: Review only staged changes (git diff --cached)
  • --unstaged: Review only unstaged changes (git diff)
  • --all: Review both staged and unstaged changes (git diff HEAD)
Git Targeting Examples
# Review staged/unstaged git changes (auto-detect)
ai-council review --pretty

# Explicitly review only staged changes
ai-council review --staged --pretty

# Explicitly review only unstaged changes
ai-council review --unstaged --pretty

# Review both staged and unstaged changes
ai-council review --all --pretty

# Review current HEAD vs main branch
ai-council review --branch=main --pretty

# Review a specific commit
ai-council review --commit=abc123 --pretty

# Review the last 5 commits on main
ai-council review --range=main~5..main --pretty

# Review all commits on a feature branch
ai-council review --review-branch=feature --pretty

# Review feature branch against develop (instead of main)
ai-council review --review-branch=feature --base=develop --pretty
💡
Priority Order

When multiple git options are provided, they are processed in this order: --commit > --range > --review-branch > --branch. If none are specified, working-tree changes are reviewed with a waterfall fallback: staged → unstaged → HEAD. Use --staged, --unstaged, or --all to override the auto-detect.
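The staged → unstaged → HEAD waterfall can be sketched as a simple decision chain. The helper name and arguments below are illustrative, not part of the tool:

```shell
# pick_scope STAGED_DIRTY UNSTAGED_DIRTY
# Each argument is 1 when that kind of change exists, 0 otherwise.
pick_scope() {
  if [ "$1" = 1 ]; then
    echo staged        # staged changes win
  elif [ "$2" = 1 ]; then
    echo unstaged      # otherwise fall back to unstaged changes
  else
    echo HEAD          # nothing dirty: diff against HEAD
  fi
}

pick_scope 0 1   # only unstaged changes present
```

Passing --staged, --unstaged, or --all simply bypasses this chain and forces the scope.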

💡
Remote Branch Fallback

When using --branch or --review-branch, if the branch doesn't exist as a local ref, AI Council automatically tries origin/<branch>. This is useful in CI environments or shallow clones where only remote tracking branches are available.
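That fallback is roughly equivalent to checking for a local ref first and trying origin/<branch> otherwise. A plain-git sketch of the idea (not the tool's actual code):

```shell
# Prefer a local branch ref; otherwise assume a remote-tracking ref.
resolve_branch() {
  if git rev-parse --verify --quiet "refs/heads/$1" >/dev/null; then
    echo "$1"
  else
    echo "origin/$1"
  fi
}

# Demo in a throwaway repository with no local 'feature' branch
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=main init -q .
resolved=$(resolve_branch feature)
echo "$resolved"
```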

Output Formats

Control how results are displayed:

  • --pretty: Human-readable formatted output with colors and emojis
  • --json: JSON output for programmatic use
  • --ci: CI mode; exits with code 1 on REJECT or low confidence
  • --suggestions: Ask agents for code fix examples when they vote REVISE or REJECT (see Fix Suggestions)
  • --pre-context: Enrich reviewer prompts with AST-extracted, embedding-ranked code context (see Pre-Review Context)
  • --pre-context-top-k=<n>: Number of top code units to inject (implies --pre-context, default: 5)
  • --deep-analysis: File-by-file deep review with structured findings, project-rule awareness, and line-level detail (see Deep Analysis)
  • --agent: Two-pass Claude Agent SDK review with codebase exploration; implies --deep-analysis (see Agent Review)

Review Commands

AI Council offers specialized review modes that focus on specific aspects of your code.

ai-council review

Comprehensive code review with all agents: Senior Developer, Security, Performance, Pragmatist, and Gemini.

ai-council review --diff --pretty
ai-council security

Security-focused review examining vulnerabilities, authentication, and secure coding practices.

ai-council security --branch=main --pretty
ai-council perf

Performance analysis focusing on algorithmic complexity, bottlenecks, and optimization opportunities.

ai-council perf --commit=abc123 --pretty
ai-council arch

Architecture review evaluating design patterns, scalability, and system structure.

ai-council arch --review-branch=feature --pretty
ai-council sanity

Quick sanity check with Pragmatist, Senior Dev, and Architect for a balanced perspective.

ai-council sanity --range=main~3..main --pretty
ai-council decide

Advanced tool for custom queries. Choose your agent set and provide a custom question for the council to evaluate.

ai-council decide --agents=security --branch=main --question="Is this auth flow secure?"

Interactive Chat

Run an interactive chat session where you can ask the council multiple questions in a row. Choose an agent set once, then type questions and see verdicts with rationale and agent votes.

Start chat (then choose agent set 1–5)
ai-council chat

With git context (same options as review commands):

Chat with git context
ai-council chat --branch=main
ai-council chat --commit=abc123
ai-council chat --range=main~5..main
ai-council chat --review-branch=feature

At startup you choose the agent set: review, security, perf, arch, or sanity. Then type your question and press Enter. Type exit or quit to end the session.

You can request fix suggestions per-message by including phrases like "with suggestions", "fix examples", or "code examples" in your question—or the AI can infer when you want them. Use --suggestions to always request them. See Fix Suggestions for full details.

💡
Use case

Chat is useful for iterating on the same codebase or branch—e.g. "Should we cache this?" then "What if we add a TTL?"—without re-running a full review each time.

Fix Suggestions

When agents vote REVISE or REJECT, they can optionally provide concrete code fix examples. Suggestions are syntax-highlighted and prettified in the terminal, covering JavaScript, TypeScript, Python, Go, Rust, Java, Ruby, PHP, SQL, Shell, and more.

CLI Usage

Add --suggestions to any review command:

Review with suggestions
# Review with fix examples from dissenting agents
ai-council review --diff --branch=main --suggestions

# Security review with suggestions
ai-council security --branch=main --suggestions

# Chat with suggestions always on
ai-council chat --suggestions

When --suggestions is used with review commands, the output switches from raw JSON to a human-readable format showing the verdict, rationale, agent votes, and fix suggestions in styled boxes.

Chat Mode

In chat mode, you can request suggestions in three ways:

  1. --suggestions flag — always request suggestions for every turn.
  2. Trigger phrases — include phrases like "with suggestions", "fix examples", "code examples", or "with examples" in your message.
  3. AI inference — the tool can automatically detect when your message implies you want fix examples (e.g. "how would I fix this?") and request them for that turn.
⚠️
AI-Generated

Suggestions are AI-generated, sanitized for display only, and never executed by the tool. They are not guaranteed correct or safe—always review and validate before applying.

MCP / Programmatic

All MCP tools accept an optional suggestions boolean parameter. When true, agents include a "suggestion" field in their vote when they recommend REVISE or REJECT.

Suggestion Rate Limiting (Chat)

When no explicit flag or trigger phrase is used, the AI classifier determines whether to request suggestions. This classifier is rate-limited to prevent excessive API calls. Configure via environment variables:

  • AI_COUNCIL_SUGGESTION_CLASSIFY_MAX (default: 20): Maximum classify calls per window
  • AI_COUNCIL_SUGGESTION_CLASSIFY_WINDOW_SECONDS (default: 60): Window duration in seconds
💡
💡
Tip

If chat reports a suggestion rate limit, either increase the env vars above or use --suggestions to always request them.

Pre-Review Context

An opt-in preprocessing stage that runs before the LLM reviewers, extracting and ranking the most relevant code units from your diff or input to give agents better context.

How It Works

  1. AST Extraction — Uses tree-sitter to parse TypeScript/JavaScript into an AST and extract semantic code units (functions, classes, methods, interfaces, type aliases). Falls back to line-based chunking for unsupported languages.
  2. Embedding Ranking — Each extracted unit is embedded using CodeBERT and ranked by cosine similarity to the review question.
  3. Context Injection — The top-k most relevant code units are injected into each reviewer agent's prompt, giving them focused structural context beyond the raw diff.

The entire pipeline is fail-open: if AST extraction, embedding, or any step fails or times out, the review proceeds normally without enriched context.

CLI Usage

Enable pre-review context
# Enable with default settings (top 5 units)
ai-council review --branch=main --pre-context --pretty

# Customize the number of code units injected
ai-council review --branch=main --pre-context-top-k=10 --pretty

# Works with all review types
ai-council security --branch=main --pre-context --pretty
ai-council arch --review-branch=feature --pre-context --pretty

MCP / Programmatic

All MCP tools accept:

  • preContext (boolean) — enable the AST + embedding pipeline.
  • preContextTopK (number) — number of top code units to inject (implies preContext, default: 5).

Configuration

Fine-tune the pipeline via environment variables:

  • AI_COUNCIL_PRE_CONTEXT (default: false): Enable pre-review context globally (true or 1)
  • AI_COUNCIL_PRE_CONTEXT_TOP_K (default: 5): Number of top code units to inject
  • AI_COUNCIL_PRE_CONTEXT_MAX_CHARS (default: 1500): Max characters per code snippet
  • AI_COUNCIL_PRE_CONTEXT_TIMEOUT (default: 15000): Timeout in ms for the entire pipeline
💡
💡
When to use

Pre-review context is most useful for large diffs where agents might otherwise miss important structural relationships. For small, focused diffs the raw diff alone is usually sufficient.
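To enable the pipeline globally instead of per invocation, the variables above can be exported once, for example:

```shell
# Enable pre-review context for every run, with a slightly larger budget
export AI_COUNCIL_PRE_CONTEXT=true
export AI_COUNCIL_PRE_CONTEXT_TOP_K=8       # inject the 8 best-ranked units
export AI_COUNCIL_PRE_CONTEXT_TIMEOUT=20000 # allow up to 20s for the pipeline
```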

Deep Analysis

A file-by-file review pipeline that produces structured, line-specific findings instead of the standard single-prompt council vote. Each file in the diff is reviewed individually by a dedicated Deep Reviewer agent, then results are aggregated by the Judge.

How It Works

  1. Diff Parsing — The unified diff is parsed into per-file patches. Binary files, snapshots, .d.ts, dist/, sourcemaps, and minified files are automatically skipped.
  2. Full-File Context — For each changed file, the full post-change file content is read from disk and included alongside the patch so the LLM has surrounding context, not just hunks (capped at 60k chars per file).
  3. Parallel File-Level Review — Files are reviewed in parallel batches (8 concurrent) by the Deep Reviewer agent. Each review returns structured JSON with a summary, recommendation (APPROVE / REVISE / REJECT), confidence score, and an array of findings. If JSON parsing fails, a retry prompt is sent automatically.
  4. Concrete Code Findings — Every finding includes the file path, line number, severity, issue description, plus the exact problematicCode and a concrete recommendedCodeChange showing the fix.
  5. Aggregation — The Judge agent reviews all per-file results and produces a single overall recommendation, weighting high-severity findings heavily.

Severity Levels

  • high: Bug, security hole, or data-loss risk. Examples: unvalidated input, race condition, null dereference
  • medium: Logic issue, missing edge case, or testability problem. Examples: off-by-one error, unchecked promise, missing error handling
  • low: Minor improvement or readability concern. Examples: naming, dead code, redundant cast

CLI Usage

Run deep analysis
# Deep file-by-file review against main
ai-council review --branch=main --deep-analysis --pretty

# Combine with suggestions for fix examples
ai-council review --branch=main --deep-analysis --suggestions --pretty

# Works with all git targeting options
ai-council review --staged --deep-analysis --pretty
ai-council review --review-branch=feature --deep-analysis --pretty

When --deep-analysis is used, the output includes a structured findings table showing each file reviewed, its recommendation, and line-level issues with severity and suggestions.

Project-Rule Injection

AI Council automatically discovers and injects project-level rules and conventions into all review prompts (both standard and deep analysis). Rules are loaded from these locations in your repository root:

  • .cursorrules
  • .cursor/rules/ (all files, recursive)
  • .claude/rules/ (all files, recursive)

The combined rules text is capped at 8,000 characters to avoid blowing up context windows. If no rule files are found, the review proceeds normally without injected rules.
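The discovery step behaves roughly like concatenating the rule files and truncating the result. A shell sketch under that assumption (not the tool's real implementation):

```shell
# Gather rule files and cap the combined text at 8,000 characters.
collect_rules() {
  cat .cursorrules 2>/dev/null
  find .cursor/rules .claude/rules -type f -exec cat {} + 2>/dev/null
}

# Demo against a throwaway project root
tmp=$(mktemp -d)
cd "$tmp"
echo "Prefer small, pure functions." > .cursorrules
mkdir -p .cursor/rules
echo "Avoid the any type." > .cursor/rules/typescript.md

rules=$(collect_rules | head -c 8000)
```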

💡
Tip

If your project already has .cursorrules or .cursor/rules/ files, AI Council picks them up automatically—no extra configuration needed.

MCP / Programmatic

All MCP tools accept an optional deepAnalysis boolean parameter. When true, the review is routed through the file-by-file deep analysis pipeline and the result includes fileReviews and findings arrays.

⚠️
Cost consideration

Deep analysis sends one LLM request per file in the diff, plus an aggregation request. For large diffs with many files, this can significantly increase API usage compared to the standard single-prompt review.

Agent Review

A two-pass review pipeline powered by the Claude Agent SDK that gives Claude full codebase access to trace callers, read files, check tests, and find dead code—not just review the diff.

How It Works

  1. Pass 1 — Surface Scan — All diffs are fed to Claude with no tool access. Claude produces a high-level overview, a priority list of files needing scrutiny, and any immediate risks visible from the diffs alone.
  2. Pass 2 — Deep File Analysis — For each changed file, a new session is forked from Pass 1 (inheriting that context) and Claude is given Read, Grep, and Glob tool access. Claude actively explores the codebase: reads full files, searches for callers of changed exports, traces data flow, checks tests, and validates error handling. Reviews run in parallel batches (8 concurrent).
  3. Aggregation — Identical to Deep Analysis: the Judge aggregates all per-file results into a single recommendation.

CLI Usage

Run agent review
# Two-pass agent review against main
ai-council review --agent --branch=main --pretty

# Agent review of staged changes
ai-council review --agent --staged --pretty

# Combine with suggestions
ai-council review --agent --branch=main --suggestions --pretty

The --agent flag implies --deep-analysis, so the output uses the same structured findings format with per-file reviews, severity levels, and concrete code suggestions.

Prerequisites

Agent review requires two additional dependencies:

  1. Claude Agent SDK — Install with npm install @anthropic-ai/claude-agent-sdk
  2. Claude Code CLI — See the Anthropic docs for installation

If the SDK is not installed, the command fails with a clear error message and installation instructions.

MCP / Programmatic

All MCP tools accept an optional agent boolean parameter. When true, the review is routed through the two-pass Claude Agent SDK pipeline (implies deepAnalysis).

⚠️
Cost & latency

Agent review is the most thorough but also the most expensive mode. Each file gets a multi-turn Claude session with tool use (up to 15 turns per file). Use it for critical reviews where codebase-aware analysis justifies the cost.

💡
When to use

Agent review excels when changes affect exports consumed by other files, modify shared utilities, or touch code with complex call chains. For self-contained changes, Deep Analysis is faster and cheaper.

Utility Commands

AI Council provides helpful utility commands for setup, debugging, and information.

ai-council tools

Lists all available MCP tools with their parameters. Useful for understanding what's available when integrating with Claude or Cursor.

ai-council tools

Shows: review, security, perf, arch, sanity (analysis tools) and decide (advanced custom queries)

ai-council test

Tests API connectivity for all configured providers. Verifies your API keys are valid and working.

ai-council test
Example Output
🔍 Testing API connectivity...

✅ OpenAI API: Connected (gpt-4o-mini)
✅ Gemini API: Connected (gemini-2.0-flash)

All APIs operational!
ai-council chat

Start an interactive chat session: choose an agent set, then ask questions and get verdicts in a loop. Supports the same git options as review (--branch, --commit, --range, --review-branch). Type exit or quit to end.

ai-council chat
ai-council chat --branch=main

See Interactive Chat for full details.

ai-council --version

Display the installed version number.

ai-council --version
ai-council -v
ai-council --help

Comprehensive help with all commands, MCP tools, environment variables, and usage examples.

ai-council --help
💡
Troubleshooting Tip

If you're getting errors, run ai-council test first to verify your API keys are configured correctly and the connections are working.

CI/CD Mode

Integrate AI Council into your continuous integration pipeline to automatically review pull requests.

GitHub Actions Example
name: AI Council Review

on:
  pull_request:
    branches: [main, develop]

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
      
      - name: Install AI Council
        run: npm install -g @mugzie/ai-council
      
      - name: Run Code Review
        env:
          AI_COUNCIL_LICENSE_KEY: ${{ secrets.AI_COUNCIL_LICENSE_KEY }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          # Optional: Add Gemini for diverse AI perspectives
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          # Optional: Customize models (defaults shown)
          AI_COUNCIL_MODEL: gpt-4o-mini
          AI_COUNCIL_GEMINI_MODEL: gemini-2.0-flash
        run: ai-council review --diff --branch=origin/main --ci

Adding Gemini

For the most comprehensive reviews, add your Gemini API key to enable the Gemini Structural Thinker agent. This provides diverse AI perspectives and helps catch edge cases that a single provider might miss.

Customizing Models

You can customize which models power your council by setting environment variables:

  • AI_COUNCIL_MODEL (default: gpt-4o-mini): OpenAI model for most agents. Use gpt-4o for higher quality reviews.
  • AI_COUNCIL_GEMINI_MODEL (default: gemini-2.0-flash): Gemini model for the Gemini Thinker. Use gemini-1.5-pro for deeper analysis.
💡
💡
Cost vs Quality

The default models (gpt-4o-mini and gemini-2.0-flash) offer a good balance of speed and cost. For critical codebases, consider using gpt-4o and gemini-1.5-pro for higher quality reviews.

⚠️
Exit Codes

AI Council exits with code 1 when overall confidence is below 0.65. In CI mode (--ci), it also exits 1 when the final decision is REJECT, or when the decision is REVISE and confidence is below 0.75. Use this to block merges in pipelines.
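In a pipeline, that behavior is consumed through the exit status. A sketch with a stub standing in for the real ai-council review --ci call:

```shell
# Stub standing in for: ai-council review --diff --branch=origin/main --ci
run_review() { return 1; }   # pretend the verdict was REJECT

if run_review; then
  gate="merge-allowed"
else
  gate="merge-blocked"       # non-zero exit blocks the merge
fi
echo "$gate"
```

Most CI systems fail the job automatically on a non-zero exit, so in practice no extra wrapping is needed.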

MCP Server

AI Council includes a Model Context Protocol (MCP) server for integration with AI assistants like Claude and Cursor.

Start MCP Server
ai-council mcp

The MCP server exposes AI Council's functionality as tools that can be invoked by AI assistants, enabling them to request code reviews during conversations.

All analysis tools accept optional parameters:

  • gitScope — which working-tree changes to include: "auto" (default; staged → unstaged → HEAD fallback), "staged", "unstaged", or "all".
  • suggestions — when true, agents include code fix examples in their vote when they recommend REVISE or REJECT. See Fix Suggestions.
  • preContext — when true, enrich reviewer prompts with AST-extracted, embedding-ranked code context. See Pre-Review Context.
  • preContextTopK — number of top code units to inject (implies preContext, default: 5).
  • deepAnalysis — when true, use file-by-file deep review with structured findings and project-rule awareness. See Deep Analysis.
  • agent — when true, use two-pass Claude Agent SDK review with codebase exploration (implies deepAnalysis). See Agent Review.
  • cwd — working directory for git commands (defaults to process cwd). Useful when the assistant is not running in the repo root.

Claude Integration

Add AI Council to Claude Code for seamless code review within your Claude conversations.

Add to Claude Code
claude mcp add --transport stdio ai-council -- ai-council mcp

Once configured, Claude can invoke AI Council to review code snippets or git diffs during your conversation.

Cursor Integration

Configure AI Council as an MCP server in Cursor for integrated code review.

.mcp.json
{
  "mcpServers": {
    "ai-council": {
      "command": "ai-council",
      "args": ["mcp"],
      "env": {
        "AI_COUNCIL_LICENSE_KEY": "your-license-key",
        "OPENAI_API_KEY": "your-openai-key",
        "GEMINI_API_KEY": "your-gemini-key",
        "AI_COUNCIL_MODEL": "gpt-4o-mini",
        "AI_COUNCIL_GEMINI_MODEL": "gemini-2.0-flash"
      }
    }
  }
}
💡
Tip

Place the .mcp.json file in your project root or home directory for Cursor to automatically detect it.

Environment Variables

Configure AI Council's behavior through environment variables.

  • AI_COUNCIL_LICENSE_KEY (required): Your license key from the store
  • OPENAI_API_KEY (optional*): OpenAI API key for GPT models
  • GEMINI_API_KEY (optional*): Google Gemini API key
  • AI_COUNCIL_MODEL (default: gpt-4o-mini): OpenAI model to use
  • AI_COUNCIL_GEMINI_MODEL (default: gemini-2.0-flash): Gemini model to use
  • AI_COUNCIL_SUGGESTION_CLASSIFY_MAX (default: 20): Max suggestion-classify calls per window in chat
  • AI_COUNCIL_SUGGESTION_CLASSIFY_WINDOW_SECONDS (default: 60): Window in seconds for the above
  • AI_COUNCIL_PRE_CONTEXT (default: false): Enable pre-review AST + embedding context (true/1)
  • AI_COUNCIL_PRE_CONTEXT_TOP_K (default: 5): Number of top code units to inject
  • AI_COUNCIL_PRE_CONTEXT_MAX_CHARS (default: 1500): Max chars per code snippet in pre-context
  • AI_COUNCIL_PRE_CONTEXT_TIMEOUT (default: 15000): Timeout in ms for the pre-context pipeline

* At least one of OPENAI_API_KEY or GEMINI_API_KEY is required.
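A preflight check for that requirement might look like the following (the function name is illustrative):

```shell
# Return 0 when at least one provider key is non-empty.
have_provider_key() {
  [ -n "${1:-}" ] || [ -n "${2:-}" ]
}

if have_provider_key "${OPENAI_API_KEY:-}" "${GEMINI_API_KEY:-}"; then
  echo "provider configured"
else
  echo "set OPENAI_API_KEY or GEMINI_API_KEY" >&2
fi
```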

AI Providers

AI Council supports multiple AI providers for diverse perspectives.

OpenAI (Primary)

Used for most agents including Senior Developer, Security, Performance, Architect, and Pragmatist.

export OPENAI_API_KEY="sk-..."

Google Gemini

Powers the Gemini Structural Thinker agent, providing alternative reasoning and challenging assumptions.

export GEMINI_API_KEY="..."
💡
Best Practice

Configure both providers for the most comprehensive reviews. The diversity of AI models leads to better coverage and catches more edge cases.

Model Selection

Customize which AI models power your council.

OpenAI Models

  • gpt-4o-mini: fast, low cost, good quality (default)
  • gpt-4o: medium speed, high cost, excellent quality
  • gpt-4-turbo: medium speed, high cost, excellent quality

Gemini Models

  • gemini-2.0-flash: fast, low cost, good quality (default)
  • gemini-1.5-pro: medium speed, medium cost, excellent quality

Agents Overview

AI Council assembles a diverse panel of specialized AI agents.

👨‍💻

Senior Developer

Generalist

15+ years of experience. Focuses on maintainability, best practices, readability, testability, and long-term sustainability. Comments on any aspect of the code.

🔒

Security Engineer

Specialist

Identifies security vulnerabilities, authentication issues, SQL injection, XSS, CSRF, and other security concerns. Abstains when code has no security implications.

⚡

Performance Engineer

Specialist

Analyzes algorithmic complexity (O(n), O(log n)), memory usage, caching opportunities, and optimization potential. Abstains for code without performance implications.

🏛️

Software Architect

Specialist

Evaluates design patterns, scalability, separation of concerns, system boundaries, and how code fits into the broader system. Abstains for simple changes.

🤖

Gemini Thinker

Challenger

Powered by Google Gemini. Challenges assumptions, reframes problems, identifies edge cases, and provides alternative system-level perspectives.

🛠️

Pragmatist

Generalist

Balances perfectionism with shipping. Considers deadlines, team velocity, and whether the code effectively solves the problem. Sometimes "good enough" is right.

Voting System

Each agent casts a vote with a confidence score. The system uses these to reach a final decision.

Vote Types

APPROVE

Code meets quality standards and can be merged

REVISE

Code needs improvements before merging

REJECT

Code has fundamental issues that must be addressed

ABSTAIN

Agent defers due to low confidence or irrelevant domain

Confidence Scores

Each vote includes a confidence score from 0.0 to 1.0:

  • 0.8-1.0: High confidence - agent is certain about their assessment
  • 0.5-0.8: Moderate confidence - reasonable certainty
  • < 0.5: Low confidence - agent automatically abstains
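The bands above can be expressed directly; awk handles the floating-point comparison here, and the helper is purely illustrative:

```shell
# Map a confidence score to the band described above.
confidence_band() {
  awk -v c="$1" 'BEGIN {
    if (c >= 0.8)      print "high"
    else if (c >= 0.5) print "moderate"
    else               print "abstain"
  }'
}

confidence_band 0.9
confidence_band 0.42
```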

Abstention Rules

Agents abstain in two scenarios:

  1. Domain Mismatch: Specialist agents (Security, Performance, Architect) abstain when code doesn't match their domain expertise
  2. Low Confidence: Any agent with confidence below 0.5 automatically abstains

Debate System

When Gemini dissents from the majority, AI Council triggers an automated debate to explore the disagreement.

Debate Flow

  1. Detection: System detects Gemini's vote differs from majority
  2. Escalation: Debate is triggered between Gemini and the strongest majority voice
  3. Rebuttal: Majority agent responds to Gemini's concerns
  4. Counter: Gemini provides final response (maintain, partial concede, or full concede)
  5. Judgment: Judge considers the debate when making final decision
Example Debate Output
⚡ Debate escalation: Gemini (REJECT) dissents from majority (APPROVE)

1. senior-dev (DEFEND): The null check on line 42 handles the edge case
   Gemini mentioned. The early return pattern is intentional for...

2. gemini (PARTIAL_CONCEDE): I acknowledge the null check, but maintain
   concern about the async race condition that could still occur when...
💡
Why Gemini?

Gemini uses a different AI model (Google's) than other agents. This intentional diversity helps catch edge cases and provides truly alternative perspectives that might be missed by a single AI provider.

Security & Sanitization

AI Council applies multiple layers of input and output sanitization to reduce injection and information-leakage risks.

User Input

All user-provided text is sanitized before use in AI prompts:

  • Null bytes and control characters are stripped
  • Shell command-substitution characters (backticks, $()) are removed
  • Line endings are normalized and whitespace is collapsed
  • Input is capped at 10,000 characters
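The combined steps behave roughly like the following pipeline. This is an approximation of the idea, not the tool's exact rules (it omits the whitespace normalization, for instance):

```shell
# Strip control characters (keeping tab/newline), drop shell-substitution
# characters, and cap the result at 10,000 characters.
sanitize_input() {
  tr -d '\000-\010\013\014\016-\037' | tr -d '`$()' | head -c 10000
}

printf 'hi `whoami` $(ls)' | sanitize_input
```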

Agent Suggestions

Suggestion output from agents is sanitized before display:

  • Control characters and null bytes are stripped
  • Output is capped at 20,000 characters
  • Suggestions are never executed by the tool—they are for human review only

Error Logging

API keys (OPENAI_API_KEY, GEMINI_API_KEY) are automatically redacted from any error messages written to stderr, preventing accidental credential leakage in logs or CI output.
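The effect is similar to pattern-redacting secrets before they reach the log. An illustrative sed sketch (the tool's actual matching may differ, and Gemini keys use a different format):

```shell
# Replace anything that looks like an sk-prefixed token with a placeholder.
redact_keys() {
  sed -E 's/sk-[A-Za-z0-9_-]+/[REDACTED]/g'
}

echo "OpenAI request failed for key sk-abc123XYZ" | redact_keys
```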

⚠️
Important

Agent suggestions are AI-generated and not guaranteed correct or safe. Always review and validate suggestions before applying them to your codebase.

Need Help?

Can't find what you're looking for? Here are some resources: