v2.1.1 — Detect & Heal: The Industry's First Self-Healing Code Review

AI Code Review
That Fixes Its Own Findings

One-Click Heal

Detect issues → AI fixes → Auto-commit. Other tools just tell you what's wrong. OCR fixes it for you. Zero human intervention, end-to-end from scan to heal.


Detect → Heal Pipeline

From finding issues to fixing code, one command. Other tools just report. OCR fixes it.

Scan

Analyze AI-generated code with ocr scan

Detect

Find hallucinated packages, logic gaps, security issues

Analyze

Deep root-cause analysis with LLM reasoning

Heal

AI auto-generates fixes, dry-run preview, then apply

Clean Code

Zero human intervention, code ships clean


How OCR Works — 3 Steps to Clean Code

Step 1: Scan

Run ocr scan in your CI pipeline or locally. OCR analyzes your code across 3 levels (L1/L2/L3) and 6 languages.

Step 2: Detect

AI-powered detection finds:

  • Hallucinated packages & APIs
  • Logic gaps & dead code
  • Security vulnerabilities
  • AI-specific failure patterns

Step 3: Heal

Run ocr heal and let AI fix the issues:

  • Automatic fix generation with LLM reasoning
  • Dry-run mode to preview changes
  • One command to patch your codebase
  • Works with any LLM provider (GLM free!)
Result: Clean, production-ready code.

Add to Your CI in 30 Seconds

Works with GitHub Actions and GitLab CI

npx @opencodereview/cli@latest scan ./src
npx @opencodereview/cli@latest heal ./src

AI auto-fix detected issues — scan, heal, ship

GitHub Actions

# .github/workflows/ci.yml
- uses: raye-deng/open-code-review@v1
  with:
    threshold: 70
    paths: 'src/**/*.ts'
    fail-on-low-score: true

GitLab CI

# .gitlab-ci.yml
include:
  - component: open-code-review/validate@v1
    inputs:
      threshold: 70
      paths: src

Live Terminal Output

Real output from ocr scan and ocr heal commands

ocr scan — bash
$ ocr scan ./src --sla L3 --provider glm
╔════════════════════════════════════════╗
║ Open Code Review v2.1.1 — L3 Deep Scan ║
╚════════════════════════════════════════╝
📂 Scanning 47 files across 3 languages...
🔍 L1: Structural Analysis........... ✓ 12 issues
🔍 L2: Embedding Recall (Ollama)..... ✓ 5 issues
🤖 L3: LLM Deep Scan (GLM-4.5-Air)... ✓ 3 issues
⏱️ Total: 8.2s
┌─────────────────────────────────────────────────┐
│ Quality Score: 72/100                           │
│ ████████████████████░░░░░ 72%                   │
├────────────┬────────────┬────────────┬──────────┤
│ Complete   │ Coherent   │ Consist.   │ Concise  │
│ 78/100     │ 70/100     │ 68/100     │ 72/100   │
└────────────┴────────────┴────────────┴──────────┘
🔴 Critical (2):
• [C-001] Hallucinated package: @ai-ui/components
→ Line 5: import { SmartTable } from '@ai-ui/components'
→ Reason: No such package exists on the npm registry
• [C-002] Undefined function: validateInput()
→ Line 23: const result = validateInput(formData)
→ Reason: Function is called but never defined
🟡 Warning (3):
• [W-001] Empty catch block — Line 45
• [W-002] TODO marker without ticket — Line 67
• [W-003] Unused import: 'lodash' — Line 2
📋 Full report: ./ocr-report.json
📊 SARIF: ./ocr-report.sarif

Quality Score Dashboard

Quantified code quality across 4 dimensions

72/100
Quality Score
L3 Deep Scan — GLM-4.5-Air
Completeness 78/100
Coherence 70/100
Consistency 68/100
Conciseness 72/100
≥80 Good · 60-79 Fair · <60 Poor

See OCR in Action

Before: AI-generated code with defects → After: OCR auto-fixed, zero defects

❌ Before — scan
// ❌ AI-generated code — 3 defects found
import { SmartForm } from '@ai-ui/smart-form'; // 🚨 hallucinated
import _ from 'lodash'; // ⚠️ unused
function handleSubmit(data: any) {
  try {
    const result = processData(data);
    return result;
  } catch (e) { // ⚠️ empty catch
    // TODO: handle error later
  }
}
ocr heal
— 2.1s —
✅ After — heal
// ✅ OCR auto-fixed — 0 defects
import { Form } from 'antd';
async function handleSubmit(data: FormData): Promise<void> {
  try {
    await processData(data);
  } catch (error) {
    logger.error('Submit failed:', error);
    throw new SubmitError('Form submission failed');
  }
}

Why Existing Tools Aren't Enough

ESLint, SonarQube, and CodeClimate were built for human-written code. AI-generated code has fundamentally different failure modes.

Traditional Linters

  • Can't detect hallucinated npm packages
  • Miss logic gaps from context window limits
  • Don't understand AI code generation patterns
  • No feedback loop to AI assistants
  • Style-focused, not logic-focused

Open Code Review — Detect & Heal

  • Detects phantom packages, functions, and APIs
  • Catches logic discontinuities and dead code
  • Purpose-built for AI code failure modes
  • Self-heal: auto-fixes, not just reports
  • 0-100 quality score with dimensional breakdown

Core Capabilities

Detect & Heal — The Industry's First Self-Healing Code Review

🩹

Detect & Heal — Self-Healing Code Review

`ocr scan` detects AI code defects; `ocr heal` auto-generates fixes, with dry-run preview → confirm → auto-fix. Works with 8 LLM providers (GLM, DeepSeek, OpenAI, Ollama, and more). One command from detection to fix, zero human intervention.

👁️

Hallucination Detection

Catches phantom packages, undefined functions, and non-existent APIs that AI models confidently generate.

🧩

Logic Gap Analysis

Identifies empty catch blocks, unreachable code, TODO markers, and missing error handling from context window limits.
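
As a rough sketch of how checks like these can work (the patterns below are illustrative heuristics, not OCR's actual rules):

```typescript
// Flag two logic-gap patterns in a source string: empty catch blocks
// and TODO markers that reference no ticket. Illustrative only.

interface Finding {
  rule: string;
  line: number;
}

function findLogicGaps(source: string): Finding[] {
  const findings: Finding[] = [];

  source.split("\n").forEach((text, i) => {
    // TODO marker with no ticket reference like JIRA-123 or #42
    if (/\bTODO\b/.test(text) && !/\b[A-Z]+-\d+\b|#\d+/.test(text)) {
      findings.push({ rule: "todo-without-ticket", line: i + 1 });
    }
  });

  // catch (...) { ... } whose body is only whitespace and // comments
  const emptyCatch = /catch\s*\([^)]*\)\s*\{(?:\s|\/\/[^\n]*)*\}/g;
  let m: RegExpExecArray | null;
  while ((m = emptyCatch.exec(source)) !== null) {
    findings.push({
      rule: "empty-catch",
      line: source.slice(0, m.index).split("\n").length,
    });
  }

  return findings;
}
```

Real detectors work on a parsed AST rather than regexes; this only shows the shape of the rule set.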

👻

Phantom Package DB

Real-time database of hallucinated npm/PyPI packages. Detects AI-fabricated dependencies before they break production.
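
Conceptually, the check can be as simple as matching import specifiers against a denylist; the two package names below come from the examples on this page, and OCR's real database and lookup are more involved:

```typescript
// Cross-check import specifiers against a denylist of known-hallucinated
// package names. Illustrative sketch, not OCR's actual implementation.

const KNOWN_PHANTOM_PACKAGES = new Set([
  "@ai-ui/components",
  "@ai-ui/smart-form",
]);

function findPhantomImports(source: string): string[] {
  const importRe = /import\s+[^'"]*['"]([^'"]+)['"]/g;
  const hits: string[] = [];
  let m: RegExpExecArray | null;
  while ((m = importRe.exec(source)) !== null) {
    if (KNOWN_PHANTOM_PACKAGES.has(m[1])) hits.push(m[1]);
  }
  return hits;
}
```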

📊

Quality Score (0-100)

Quantified scoring across 4 dimensions: completeness, coherence, consistency, and conciseness.
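
The dashboard numbers above are consistent with a plain average of the four dimensions: (78 + 70 + 68 + 72) / 4 = 72. How OCR actually weights them is not documented here, so treat this as a sketch:

```typescript
// Combine the four dimension scores into one 0-100 quality score.
// Equal weighting is this sketch's assumption.

interface DimensionScores {
  completeness: number;
  coherence: number;
  consistency: number;
  conciseness: number;
}

function qualityScore(d: DimensionScores): number {
  const values = [d.completeness, d.coherence, d.consistency, d.conciseness];
  return Math.round(values.reduce((sum, v) => sum + v, 0) / values.length);
}
```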

🚀

CI/CD Quality Gate

Block low-quality AI code from merging. Works with GitHub Actions and GitLab CI Components. SARIF output supported.
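
On GitHub, the generated ./ocr-report.sarif can be published to code scanning with the standard upload action — a hedged sketch; adapt the path and placement to your workflow:

```yaml
# .github/workflows/ci.yml — after the scan step, publish findings
# to GitHub code scanning (path matches the terminal output above)
- uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: ./ocr-report.sarif
```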

🔬

L3 Deep Scan — Foundation for Precise Fixes

Deep detection is the foundation for precise fixes. Suspicious code blocks are sent to remote LLMs for thorough analysis. Supports 8 providers: OpenAI, GLM, DeepSeek, Ollama, and more. GLM and Ollama are free!

🌍

Multi-Language AI Detection

6 language-specific detectors: TypeScript/JavaScript, Python, Java, Go, Kotlin, Rust. Each language has its own AI defect pattern recognition for precise detection.

🔗

Universal Provider Adapter

OpenAI-compatible adapter for any LLM service. Built-in provider presets auto-fill baseUrl. Switch with --provider, --api-key, --model flags.
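
A preset table like the following is what "auto-fill baseUrl" implies conceptually — the endpoint URLs here are this sketch's assumptions about common provider defaults, not values read from OCR's source:

```typescript
// Map a --provider name to a default OpenAI-compatible base URL.
// URLs are illustrative assumptions, not OCR's actual presets.

const PROVIDER_PRESETS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  deepseek: "https://api.deepseek.com",
  glm: "https://open.bigmodel.cn/api/paas/v4",
  ollama: "http://localhost:11434/v1",
};

function resolveBaseUrl(provider: string, override?: string): string {
  // An explicit override (e.g. from a CLI flag) wins; otherwise the preset fills it in.
  return override ?? PROVIDER_PRESETS[provider.toLowerCase()] ?? PROVIDER_PRESETS.openai;
}
```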

🎯

3-Tier SLA Scanning

L1 Fast, L2 Standard (Embedding + Local LLM), L3 Deep (Embedding + Remote LLM) — choose the depth your project needs. L3 is free for individuals (bring your own API key).

Real Teams, Real Results

See how teams use Open Code Review to ship AI-generated code with confidence.

AI Startup

AI Startup — Automated PR Review

Team uses Cursor + Copilot for daily coding. Every PR automatically runs ocr scan + ocr heal before merge.

Low-score code is auto-blocked before merge. AI auto-fixes common hallucinated packages and logic gaps. 70% less review time.

After using OCR, our PR review time dropped by 70%.

CTO, AI SaaS Startup

GitHub Actions · Cursor · TypeScript
🏦 Enterprise

FinTech — AI Code Compliance

Regulations require all AI-generated code to go through review. OCR runs in CI to automatically scan every merge request.

Every MR generates a quality report. SARIF output integrates with GitLab Code Quality. Passed compliance audit on first attempt.

OCR helped us pass our compliance audit on the first try.

VP Engineering, FinTech Company

GitLab CI · Java · Python
🐙 Open Source

Open Source — Contribution Review

Maintainers use OCR to review community contributions. Automatically detects hallucinated packages and AI-generated code with defects.

Zero manual effort for contribution quality review. Suspicious PRs are auto-flagged. Maintainers only review genuine issues.

No more worrying about contributors submitting hallucinated code written by ChatGPT.

Maintainer, 5k★ Open Source Project

GitHub Actions · Go · OpenAI

Free for Individuals

Full CLI + AI Self-Heal, $0 forever
Team plans from $19/seat/month

View Pricing →

Frequently Asked Questions

What is Open Code Review?

The industry's first self-healing AI code review tool. Detect hallucinations, logic gaps, then auto-fix them with AI. End-to-end from scan to heal, zero human intervention.

Is it really free?

Yes. The CLI and all local analysis features are free forever. GLM/Ollama can also be used for free with the self-heal feature. No credit card, no trial limits.

How is it different from ESLint?

Traditional linters just check code style. Open Code Review detects AI-specific issues — hallucinated packages, logic gaps — and doesn't just report them, it fixes them.

Which CI/CD platforms are supported?

GitHub Actions and GitLab CI Components are officially supported. Run locally via CLI too.

Does it work with Cursor or Copilot?

Yes. ocr heal auto-fixes code and generates IDE rule files so AI assistants write better code from the start.

What is Detect & Heal?

OCR's core differentiator: ocr scan detects issues, ocr heal auto-fixes them with AI. Other tools just tell you what's wrong — OCR fixes it for you. Supports 8 LLM providers, GLM is completely free.