AI-powered code review

CodeCanary

Catch bugs, security issues, and quality problems before they land in main. Runs on every push or locally from the terminal.

terminal
$ codecanary review

Loaded 1 project doc(s) for review context
Re-evaluating 2 unresolved thread(s) (base b4723757)...

  [evaluate] internal/auth/session.go:42 🟠 bug unchecked-error — code changes detected
  [skip]     internal/handler.go:87 nitpick unused-param — no code changes, no human replies

Triage result: 1 skipped, 0 auto-resolved, 1 need evaluation

  [resolved] internal/auth/session.go:42 🟠 bug unchecked-error — fixed by code change
1 finding(s) resolved by code changes
Reviewing incremental changes (1 known issue excluded)...
No new findings
All clear! No issues remaining.

  ──────────────────────────────────────────────
  Model              Input   Output        Cost
  ──────────────────────────────────────────────
  claude-haiku-4-5     842      156     $0.0003
  ──────────────────────────────────────────────
  Duration: 1.8s
  PR size: +24/-8 lines, 3 files

$

Built for real workflows

Not another AI gimmick. A review tool that fits into how you already work.

GitHub Actions native

Runs as a composite action on every push. Posts inline comments on exact diff lines. Zero config after setup.
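The action's published path and inputs aren't shown here, so the workflow below is a minimal sketch with placeholder names (`your-org/codecanary-action`, `api-key`, `LLM_API_KEY` are all assumptions):

```yaml
# Hypothetical workflow sketch — the real action path and inputs may differ.
name: codecanary
on: [push]
jobs:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write   # required to post inline comments on diff lines
    steps:
      - uses: actions/checkout@v4
      - uses: your-org/codecanary-action@v1   # placeholder action path
        with:
          api-key: ${{ secrets.LLM_API_KEY }} # placeholder secret name
```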

Incremental reviews

Go-driven triage classifies existing threads at zero LLM cost. Only changed code gets re-evaluated.
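The triage rule implied by the sample terminal output above can be sketched in Go. This is a simplified illustration, not CodeCanary's internal API; the `Thread` shape and field names are assumptions:

```go
package main

import "fmt"

// Action mirrors the triage labels in the sample terminal output.
type Action string

const (
	Evaluate Action = "evaluate"
	Skip     Action = "skip"
)

// Thread is a hypothetical, simplified view of an unresolved review thread.
type Thread struct {
	File         string
	Line         int
	CodeChanged  bool // the new diff touches this thread's location
	HumanReplied bool // an author replied since the last evaluation
}

// triage applies the zero-LLM-cost rule: only threads with new code or new
// replies are sent back to the model; everything else is skipped locally.
func triage(t Thread) Action {
	if t.CodeChanged || t.HumanReplied {
		return Evaluate
	}
	return Skip
}

func main() {
	threads := []Thread{
		{File: "internal/auth/session.go", Line: 42, CodeChanged: true},
		{File: "internal/handler.go", Line: 87},
	}
	for _, t := range threads {
		fmt.Printf("[%s] %s:%d\n", triage(t), t.File, t.Line)
	}
}
```

Because this decision needs no model call, a large PR with many settled threads costs the same to re-triage as a small one.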

💬

Conversational

When authors reply to a finding, CodeCanary re-evaluates in context. It understands fixes, dismissals, and rebuttals.

🔒

Anti-hallucination

Explicit file allowlists, line validation against the diff, and distance thresholds prevent fabricated findings.
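The checks named above can be sketched as a single validation pass. The `Finding` type, function names, and threshold value here are illustrative assumptions, not CodeCanary's actual implementation:

```go
package main

import "fmt"

// Finding is a hypothetical shape for a model-reported issue.
type Finding struct {
	File string
	Line int
}

// validate drops findings that reference files outside the diff's allowlist,
// or lines farther than maxDistance from any changed line, so a fabricated
// location never reaches the PR.
func validate(f Finding, changedLines map[string][]int, maxDistance int) bool {
	lines, ok := changedLines[f.File] // file allowlist: only files in the diff
	if !ok {
		return false
	}
	for _, l := range lines { // line validation with a distance threshold
		d := f.Line - l
		if d < 0 {
			d = -d
		}
		if d <= maxDistance {
			return true
		}
	}
	return false
}

func main() {
	changed := map[string][]int{"internal/auth/session.go": {40, 41, 42}}
	fmt.Println(validate(Finding{"internal/auth/session.go", 42}, changed, 2))
	fmt.Println(validate(Finding{"made/up/file.go", 10}, changed, 2))
}
```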

💰

Cost-efficient

Fast triage model for thread re-evaluation, full model for review. Tracks per-invocation usage so you see what you spend.
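Per-invocation cost is simple token arithmetic. The per-million-token prices below are illustrative placeholders, not real provider pricing; the token counts come from the sample terminal output above:

```go
package main

import "fmt"

// estimateCost converts token usage into dollars given per-million-token
// prices. Prices here are placeholders — check your provider's rate card.
func estimateCost(inputTokens, outputTokens int, inPerMTok, outPerMTok float64) float64 {
	return float64(inputTokens)/1e6*inPerMTok + float64(outputTokens)/1e6*outPerMTok
}

func main() {
	// 842 input / 156 output tokens, with hypothetical $1 in / $5 out per MTok.
	fmt.Printf("$%.4f\n", estimateCost(842, 156, 1.0, 5.0))
}
```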

🔌

Multi-provider

Bring your own LLM: Anthropic, OpenAI, OpenRouter, or Claude CLI. No vendor lock-in. New providers are easy to add.

Auto-resolution

When code is fixed, threads are auto-resolved. No stale reviews cluttering your PRs.

💻

Local reviews

Review your changes from the terminal before pushing. Same engine, same findings, instant feedback.

📄

Configuration-as-code

Project-specific rules, severity levels, ignore patterns, and context. Checked into your repo alongside the code.

How it works

Three commands. Automated reviews from there.

1

Install

One curl command installs the codecanary binary. Supports Linux and macOS, amd64 and arm64.

2

Setup

The interactive wizard configures your provider, stores your API key in the system keychain, and creates the config file.

3

Review

Run locally for instant feedback, or merge the GitHub Actions workflow for automated reviews on every push.

Configuration-as-code

Define rules, context, and ignore patterns in .codecanary/config.yml. Checked into your repo.

.codecanary/config.yml
version: 1
provider: anthropic
review_model: claude-sonnet-4-6
triage_model: claude-haiku-4-5-20251001

context: |
  Go REST API using chi router. Tests use testify.

rules:
  - id: error-handling
    description: "Errors must be wrapped with context"
    severity: warning
    paths: ["**/*.go"]

  - id: sql-injection
    description: "Queries must use parameterized statements"
    severity: critical

ignore:
  - "vendor/**"
  - "*.lock"

Bring your own LLM

No vendor lock-in. Pick the provider that works for you.

Anthropic
Claude Sonnet, Haiku, Opus
OpenAI
GPT-5.4, GPT-5.4-mini, o3
OpenRouter
Any model via OpenRouter
Claude CLI
Use your logged-in session, no API key needed

Switching providers? Change two lines:
provider: openai
review_model: gpt-5.4

Adding a new provider →

Start reviewing in 30 seconds

Install, setup, review. That's it.

$ curl -fsSL https://codecanary.sh/install | sh

View on GitHub