Revieko — Review Playbooks (PR)

This document is a set of “what to do” playbooks for the most common situations Revieko flags:

  • high structural risk (high struct),

  • high control risk (high control),

  • effectful tails (IO/NET/DB/EXEC/LOG),

  • USER_INPUT → (NET/DB/EXEC/LOG) adjacency,

  • partial analysis due to limits/fallbacks.

The goal is not to replace review, but to quickly choose the right reaction:

  • Accept + explain

  • Simplify

  • Isolate

  • Add tests / invariants

  • Split PR / rerun with full analysis

Read first: docs/PR_REVIEW_QUICKSTART.md
Field reference: docs/REPORT_FIELDS_REFERENCE.md

0) General response principles

0.1 Revieko signals are not a verdict — they are an attention map

  • High risk does not mean the code is wrong.

  • Low risk does not mean the code is safe or bug-free.

  • The point of the report is to give you 3–10 places to inspect first.

0.2 Three core reviewer actions (and when to use them)

  • Simplify — when complexity appeared “along the way” with no clear architectural reason.

  • Isolate — when the change looks like a new subsystem/pattern/algorithm.

  • Accept + explain — when complexity is justified, but must be stated and recorded (contract/test/comment).

0.3 If invariant_violations exist — treat it as a hard signal

If team rules (semantics/invariants) are enabled, violations should be treated as:

  • “fix it”, or

  • “explicitly exempt/override the rule”,

but never silently ignored.

1) Pick the playbook fast (30 seconds)

Look at the Summary:

  • analysis=partial → Playbook: Partial analysis (scope)

  • risk_level=high and hotspots are mostly struct_pattern_break / depth_spike → Playbook: High struct

  • control_risk_level=high or hotspots include control_outlier → Playbook: High control

  • Hotspots show Effects/Taint, and JSON includes effect_density_tail / dangerous_flow_score

    • if the tail is “densely effectful” → Playbook: Effect tails

    • if USER_INPUT appears near NET/DB/EXEC/LOG → Playbook: USER_INPUT → Effects

2) Playbook: High struct (structural anomaly)

Signals

  • risk_level=high and/or high struct_risk

  • hotspots are mostly:

    • struct_pattern_break

    • depth_spike

    • mixed (when shape/depth dominates)

  • control_risk_level can be low — that’s normal: “shape is atypical, branching isn’t overheated”.

Risk you are reducing

  • increased cognitive load,

  • “hidden architecture” inside a file,

  • atypical patterns introduced without a contract (hard to maintain later).

Fast check (60 seconds)

  • Which file is the main contributor in Per-file structural risk?

  • In the top-3 hotspots, what was introduced?

    • new entities (class/subsystem/mini-DSL),

    • generators / yield,

    • lambdas / functional constructs,

    • atypical math/algorithm,

    • sharp nesting growth.

  • Decide: feature architecture or accidental complexity?

Actions

A) Accept + explain (complexity is justified)

Use when:

  • the change adds real value (new algorithm/mode),

  • there are tests/contract,

  • the area is localized, readable, and explained.

What to do:

  • add a short comment explaining why the pattern is needed,

  • add test(s) for contract/edge cases,

  • if it’s a “mode” — add an explicit flag/config and document it.

PR comment template:
“Revieko flagged a structural pattern break in <file:lines>. This is intentional: <why>. Added tests for <what> / updated the contract comment.”

B) Simplify (reduce architectural mass)

Use when:

  • the pattern is “clever” but not necessary,

  • it can be expressed more simply and closer to the repo style.

Simplification techniques (a before/after sketch follows the list):

  • replace lambdas with named functions,

  • remove generators: use a simple next_* function,

  • reduce nesting (guard clauses, early returns),

  • separate “computation” from “effects” (IO separately),

  • make algorithm steps explicit (variables, names).
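
A minimal before/after sketch of these techniques, using hypothetical names (Order, is_billable, discounted_totals):

```python
from dataclasses import dataclass

@dataclass
class Order:            # hypothetical shape, for illustration only
    id: int
    total: float
    cancelled: bool

# Before: lambda, deep nesting, and an effect buried in the loop.
def process_before(orders: list[Order | None]) -> list[float]:
    results = []
    for o in orders:
        if o is not None:
            if (lambda x: x.total > 0 and not x.cancelled)(o):
                print(f"processing {o.id}")          # IO mixed into computation
                results.append(o.total * 0.9)
    return results

# After: named predicate, flat comprehension, effects performed separately.
def is_billable(order: Order) -> bool:
    return order.total > 0 and not order.cancelled

def discounted_totals(orders: list[Order | None]) -> list[float]:
    """Pure computation: no IO, easy to unit-test."""
    return [o.total * 0.9 for o in orders if o is not None and is_billable(o)]

def process(orders: list[Order | None]) -> list[float]:
    totals = discounted_totals(orders)   # compute first...
    for t in totals:
        print(f"processed total={t}")    # ...then perform effects
    return totals
```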

PR comment template:
“This introduces a structural pattern break relative to the repo norm. Let’s rewrite it in a flatter/more imperative way (without <lambda/yield/...>) to reduce cognitive load.”

C) Isolate (new subsystem/pattern)

Use when:

  • a mini-architecture landed inside the file (strategy/state/manager),

  • a new reusable algorithm appears,

  • the change introduces a new operating mode and needs an explicit interface.

What to do (a sketch of one possible extraction follows the list):

  • extract into a dedicated module/class,

  • provide an explicit API,

  • add minimal docs/README for the module,

  • add tests for the interface, not internals.
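
One possible shape for such an extraction, shown as a hypothetical retry_policy module with a deliberately small public API:

```python
# retry_policy.py (hypothetical module name)
from dataclasses import dataclass

@dataclass(frozen=True)
class RetryPolicy:
    max_attempts: int = 3
    base_delay_seconds: float = 0.5

    def delay_for(self, attempt: int) -> float:
        """Exponential backoff: 0.5s, 1.0s, 2.0s, ..."""
        return self.base_delay_seconds * (2 ** attempt)

__all__ = ["RetryPolicy"]  # the explicit public surface; tests target this
```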

PR comment template:
“This looks like a new subsystem. Let’s extract it into a separate module with an explicit interface and tests — it will keep the change localized and easier to understand.”

3) Playbook: High control (overheated branching / error paths / return tails)

Signals

  • control_risk_level=high (even if struct_risk is medium/low)

  • hotspots of type control_outlier

  • in control_summary high values:

    • branch_cond_risk

    • error_path_risk

    • return_path_risk

Signal amplifier: high values of the *_effect_density metrics:

  • branch_cond_effect_density

  • error_path_effect_density

  • return_path_effect_density

Risk you are reducing

  • mistakes due to complex conditions,

  • “programs inside error handling”,

  • unexpected side effects right before exit (before return/raise),

  • poor testability and unpredictable behavior.

Fast check (60 seconds)

  • Which axis is overheated?

    • BRANCH_COND: conditions are too complex

    • ERROR_PATH: error handling got complex

    • RETURN_PATH: logic/effects happen right before return/raise

  • Are there effects inside those paths? If yes — priority #1.

Refactoring recipes (by axis)

A) Overheated conditions (BRANCH_COND)

Typical issues:

  • long if with computation + IO,

  • elif chains as a mini state machine,

  • complex unnamed conditions.

What to do (a sketch follows the list):

  • extract predicates into named functions (is_*, should_*),

  • break into guard clauses,

  • replace elif chains with a decision table/state map,

  • move effects (IO/NET/DB) out of condition expressions.
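
A small sketch of the last two techniques, with hypothetical names (Msg, should_probe, ROUTES):

```python
from dataclasses import dataclass

@dataclass
class Msg:                 # hypothetical message shape
    kind: str
    retries: int = 0
    internal: bool = False

# Before: an elif chain acting as a mini state machine, driven by a
# complex unnamed condition.
def route_before(msg: Msg) -> str:
    if msg.kind == "ping" and msg.retries < 3 and not msg.internal:
        return "probe"
    elif msg.kind == "data":
        return "ingest"
    elif msg.kind == "ctrl":
        return "control"
    return "drop"

# After: a named predicate plus a decision table.
def should_probe(msg: Msg) -> bool:
    return msg.kind == "ping" and msg.retries < 3 and not msg.internal

ROUTES = {"data": "ingest", "ctrl": "control"}

def route(msg: Msg) -> str:
    if should_probe(msg):
        return "probe"
    return ROUTES.get(msg.kind, "drop")
```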

PR comment:
“Control signals show overheated conditions. Let’s extract predicates into named helpers and remove effects from condition checks — it’ll reduce error risk and improve readability.”

B) Complex error path (ERROR_PATH)

Typical issues:

  • except does half the business logic,

  • many effects inside error handlers,

  • mixed compensations/logging/retries.

What to do (a sketch follows the list):

  • minimize work in except: capture context → call handle_error(...),

  • unify error handling (one coherent path),

  • make retry/backoff policies explicit,

  • add tests for failure scenarios (mandatory if behavior changes).
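
A minimal sketch of the "capture context → delegate" pattern; do_work and handle_error are hypothetical placeholders:

```python
import logging

logger = logging.getLogger(__name__)

def do_work(job_id: str) -> None:
    ...  # the actual unit of work (stubbed here)

def handle_error(exc: Exception, *, job_id: str) -> None:
    """One coherent error path: logging/compensation policy lives here."""
    logger.error("job %s failed: %s", job_id, exc)

def run_job(job_id: str) -> None:
    try:
        do_work(job_id)
    except Exception as exc:
        handle_error(exc, job_id=job_id)  # capture context, delegate
        raise                             # keep the failure visible to callers
```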

PR comment:
“Error-path has become complex. Let’s extract error handling into dedicated functions/modules so except doesn’t turn into a program inside a program.”

C) Heavy return/raise tails (RETURN_PATH)

Typical issues:

  • effects before return (logging/writes/network),

  • tail changes the function contract “at exit”.

What to do (a sketch follows the list):

  • split: compute result → perform effects → return,

  • document the exit contract (what’s guaranteed before returning),

  • add tests for “exit contract”.
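
A before/after sketch of the compute → effects → return split (file writing stands in for any effect; all names are hypothetical):

```python
# Before: the effect happens right at the exit point.
def save_report_before(data: dict) -> str:
    report = "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))
    with open("report.txt", "w") as f:   # write buried in the return tail
        f.write(report)
    return report

# After: compute the result, perform effects, then return.
def build_report(data: dict) -> str:
    """Pure: deterministic output, no IO."""
    return "\n".join(f"{k}: {v}" for k, v in sorted(data.items()))

def write_report(report: str, path: str = "report.txt") -> None:
    with open(path, "w") as f:
        f.write(report)

def save_report(data: dict) -> str:
    report = build_report(data)  # 1. compute
    write_report(report)         # 2. effects
    return report                # 3. exit contract: report is on disk
```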

PR comment:
“Return-path now contains heavy logic/effects. That’s risky because it changes the exit contract. Let’s separate computation from effects and cover the contract with tests.”

4) Playbook: Effect tails (dense tail with NET/DB/FILE/EXEC/LOG)

Signals

  • high effect_density_tail in JSON/Full report (per file or global)

  • TL;DR hints like “Tail contains NET/DB/FILE effects…”

  • hotspots include Effects (e.g., NET_IO+DB+LOG) without explicit Taint

Risk you are reducing

  • complex, poorly testable effect-heavy endings of functions/modules,

  • hidden dependency on operation order,

  • implicit transactional behavior (partial success),

  • instability under errors (partial state).

Fast check (2 minutes)

  • What effects are in the tail?

    • DB / NET_IO / EXEC — highest priority

    • LOG — depends, but high volume can become noise

  • Is there a clear boundary “pure → effects”?

  • What happens if an error occurs mid-tail?

  • Is there idempotency / transactional behavior / compensations?

Recommended actions (a sketch follows the list)

  • extract effects into a dedicated layer/function (persist_*, send_*, exec_*)

  • make operation order explicit (and document it)

  • add tests:

    • partial failure handling,

    • reruns (idempotency),

    • timeouts/retries (for network).
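
A sketch of one possible effects layer; db and http are hypothetical injected clients, and upsert/post stand in for whatever your stack provides:

```python
def normalize(raw: dict) -> dict:
    """Pure normalization step (stubbed)."""
    return {k.lower(): v for k, v in raw.items()}

def persist_order(db, order_id: str, payload: dict) -> None:
    # Upsert keyed on order_id keeps reruns idempotent.
    db.upsert("orders", key=order_id, value=payload)

def notify_order(http, order_id: str) -> None:
    http.post("/orders/notify", json={"id": order_id}, timeout=5)

def handle_order(db, http, order_id: str, raw: dict) -> dict:
    payload = normalize(raw)              # pure computation first
    persist_order(db, order_id, payload)  # documented order: persist...
    notify_order(http, order_id)          # ...then notify
    return payload
```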

PR comment:
“IO/NET/DB effects are concentrated in the tail of this file/function. Let’s extract an ‘effects layer’ and add tests for failures/idempotency so behavior is predictable.”

5) Playbook: USER_INPUT → (NET/DB/EXEC/LOG) (suspicious adjacency)

Signals

  • dangerous_flow_score > 0

  • hotspot has taint_hint=USER_INPUT and effect_hint includes NET_IO/DB/EXEC/LOG

  • TL;DR hints like “USER_INPUT near NET/DB/LOG…”

Risk you are reducing

This isn’t full dataflow analysis, but the practical risk is clear:

  • input may reach network/DB/exec/log without proper validation,

  • risk of injections/leaks/log spam/unexpected side effects.

Fast check (2–3 minutes)

  • Where is the trust boundary?

    • validation, normalization, limits, escaping

  • Is there an allowlist (params/commands/fields)?

  • Is user input logged? If yes — any PII/secrets risk?

  • For DB/NET/EXEC: is there an explicit layer responsible for safety/contracts?

Recommended actions (by effect type; each is followed by a short sketch)

A) USER_INPUT → DB

  • parameterized queries / ORM

  • strict validation schema

  • length/format constraints

  • tests for “bad” inputs
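
A minimal sketch using the standard library's sqlite3; the table, column, and length limit are hypothetical:

```python
import sqlite3

MAX_NAME_LEN = 64  # hypothetical limit

def find_user(conn: sqlite3.Connection, name: str):
    if not (0 < len(name) <= MAX_NAME_LEN):
        raise ValueError("invalid name length")
    # Parameterized query: user input never reaches the SQL text itself.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()
```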

B) USER_INPUT → NET_IO

  • allowlist domains/endpoints

  • timeouts/retries policy

  • payload size limits

  • logging without leaks
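
A minimal sketch using the standard library's urllib; the allowlist, timeout, and size limit are hypothetical values:

```python
from urllib.parse import urlparse
from urllib.request import urlopen

ALLOWED_HOSTS = {"api.example.com"}   # hypothetical allowlist

def fetch(url: str, max_bytes: int = 1 << 20) -> bytes:
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise ValueError(f"host not allowed: {host!r}")
    with urlopen(url, timeout=5) as resp:   # explicit timeout
        return resp.read(max_bytes)         # payload size limit
```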

C) USER_INPUT → EXEC

  • forbid or enforce a strict allowlist

  • never build shell commands via string concatenation

  • dedicated “executor” layer covered by tests
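
A minimal sketch of an allowlisted executor built on subprocess; mytool and the allowed commands are hypothetical:

```python
import subprocess

ALLOWED_COMMANDS = {"status", "version"}   # hypothetical strict allowlist

def run_tool(command: str) -> str:
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {command!r}")
    # Argument list with shell=False: no string concatenation, no shell parsing.
    result = subprocess.run(
        ["mytool", command], capture_output=True, text=True, check=True
    )
    return result.stdout
```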

D) USER_INPUT → LOG

  • masking/redaction

  • volume limits

  • exclude secrets/PII
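
A minimal redaction sketch; the regex and limit are hypothetical and deliberately simple (real redaction policies are usually broader):

```python
import logging
import re

logger = logging.getLogger(__name__)

MAX_LOGGED = 200                        # hypothetical volume limit
EMAIL = re.compile(r"[^@\s]+@[^@\s]+")  # deliberately simple PII pattern

def redact(text: str) -> str:
    """Mask obvious PII and truncate before anything reaches the log."""
    return EMAIL.sub("<email>", text)[:MAX_LOGGED]

def record_input(user_text: str) -> None:
    logger.info("user input: %s", redact(user_text))
```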

PR comment:
“There’s a place where USER_INPUT appears close to NET/DB/EXEC/LOG. Let’s make the boundaries explicit (validation/allowlist/masking) and add tests for unsafe inputs.”

6) Playbook: Partial analysis (scope: PR includes out-of-profile files)

Signals

  • analysis=partial

What it means

The PR includes files outside Revieko’s analysis scope (currently non-Python: YAML/CSV/Markdown, etc.). The report for the Python part remains complete.

Risk you are reducing

Missing important changes in configs/data/docs because “the tool didn’t say anything”.

What the reviewer should do (fast path)

  • Review Python changes via Revieko: per-file risk → top hotspots → decision.

  • Review out-of-profile files manually:

    • configs: schema validity, backward compatibility, secrets, environments

    • data/CSV: format, migrations, size/diff noise

    • docs/Markdown: correctness of instructions/examples/commands

PR comment (correct wording)

“analysis=partial — the PR includes out-of-profile files (configs/data/docs). Revieko analyzes only the Python portion; we’ll review those files manually.”

7) Mini templates: what to ask for in the PR (short, practical)

7.1 When to ask for rationale

“Is this an intentional architectural idea or accidental complexity? If intentional, add a short rationale and a contract test.”

7.2 When to ask for isolation

“This looks like a new subsystem. Let’s extract it into a separate module/class with an explicit API.”

7.3 When to ask for tests

“This change affects the mode/contract/effect tail. We need tests for failures/edge cases/idempotency.”

7.4 When to ask for split PR

“Right now analysis is partial. Please split the PR or reduce hunks so the report can be complete.”

8) A useful team habit

If Revieko repeatedly flags the same problem classes, the team can:

  • formalize rules in .revieko/semantics.yaml (invariants),

  • enforce the style “effects separate from computation”,

  • agree on: “if we introduce a new pattern → isolate + contract + tests”.

That turns one-off signals into systematic architectural improvement.

© 2026 • Build systems that reconstruct the structure of reality