LangGraph

Reflection Agent - Introduction

Reflection agents add a critique stage so outputs can be improved before finalization.

Core Theory

A reflection agent introduces self-critique into the graph loop. Instead of returning the first draft, the system generates output, critiques it, and revises if needed.

Minimal reflection architecture:

  • Generator node: produce draft response.
  • Reflector node: evaluate quality against rubric (factuality, completeness, policy, tone).
  • Router: if quality below threshold, route back for revision; otherwise finalize.
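The three nodes above can be sketched as a minimal, dependency-free loop. The node functions, state shape, and threshold below are illustrative stand-ins, not a real LangGraph API:

```python
# Hypothetical sketch of the three-node reflection architecture.
# generate_draft / evaluate / route are stand-ins for real model calls.

QUALITY_THRESHOLD = 0.8

def generate_draft(state):
    """Generator node: produce (or revise) a draft from the request."""
    draft = f"draft v{state['round'] + 1} for: {state['request']}"
    return {**state, "draft": draft, "round": state["round"] + 1}

def evaluate(state):
    """Reflector node: score the draft against a rubric (stubbed here)."""
    score = min(1.0, 0.5 + 0.2 * state["round"])  # pretend each round improves
    return {**state, "score": score}

def route(state):
    """Router: revise if below threshold, otherwise finalize."""
    return "revise" if state["score"] < QUALITY_THRESHOLD else "finalize"

state = {"request": "summarize the contract", "round": 0}
while True:
    state = evaluate(generate_draft(state))
    if route(state) == "finalize":
        break

print(state["round"], round(state["score"], 2))
```

In a real LangGraph build, the router would be a conditional edge between the generator and reflector nodes; the while-loop here just makes the cycle explicit.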

Why this matters: many LLM errors are fixable in one additional pass. Reflection catches omissions, weak structure, and policy violations before they reach the user.

Operational constraints: reflection increases latency and token usage, so it should be conditional (risk-based or confidence-based), not always-on for every request.
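A conditional gate can be as simple as a predicate over intent risk and generator confidence. The intent labels and thresholds below are illustrative assumptions:

```python
# Hedged sketch: gate reflection on risk and model confidence rather than
# running it for every request. Labels and thresholds are illustrative.

HIGH_RISK_INTENTS = {"legal", "medical", "financial"}

def should_reflect(intent: str, confidence: float) -> bool:
    """Reflect when the request is high-risk or the generator is unsure."""
    return intent in HIGH_RISK_INTENTS or confidence < 0.7

print(should_reflect("smalltalk", 0.95))  # low risk, confident: skip reflection
print(should_reflect("legal", 0.95))      # high risk: always reflect
print(should_reflect("smalltalk", 0.4))   # low confidence: reflect
```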

Failure modes:

  • Over-criticizing and looping without convergence.
  • Reflector hallucinating issues that are not real.
  • Generator and reflector objectives misaligned.

Production guidance: enforce max reflection rounds, explicit scoring rubric, and final fallback route when improvement plateaus.
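These three guards (round cap, rubric score, plateau fallback) can be sketched together. The score sequences stand in for real reflector output, and every threshold is an assumption:

```python
# Illustrative loop guard: hard round cap plus a minimum-improvement check,
# falling back to the best draft seen when quality plateaus.

MAX_ROUNDS = 4
MIN_IMPROVEMENT = 0.05
THRESHOLD = 0.9

def run_reflection(scores):
    """Walk a fixed score sequence and report how/why the loop stopped."""
    best, prev = 0.0, None
    for rnd, score in enumerate(scores[:MAX_ROUNDS], start=1):
        best = max(best, score)
        if score >= THRESHOLD:
            return rnd, best, "threshold_met"
        if prev is not None and score - prev < MIN_IMPROVEMENT:
            return rnd, best, "plateau_fallback"
        prev = score
    return MAX_ROUNDS, best, "round_cap_fallback"

print(run_reflection([0.6, 0.92]))           # crosses the threshold
print(run_reflection([0.6, 0.62, 0.63]))     # plateaus, takes the fallback
print(run_reflection([0.5, 0.6, 0.7, 0.8]))  # hits the hard round cap
```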

Deepening Notes

Source-backed reinforcement: these points are extracted from the LangGraph source note to sharpen architecture and flow intuition.

  • The reflection agent is particularly important as an entry point for learning LangGraph, because it shows how powerful even simple agent patterns can be.
  • Before jumping into the implementation details in code, it helps to first understand what the reflection agent pattern is and where it fits in a workflow.
  • The pattern is inherently a little slower, because two agents are involved: one generating and one reflecting.
  • When reliability matters, the reflection agent pattern is a good choice; the LangGraph documentation provides a simple reflection-loop diagram illustrating it.
  • Once this pattern is understood, implementing a basic reflection agent in LangGraph and learning the other agent types becomes much simpler.

Interview-Ready Deepening

Source-backed reinforcement: these points consolidate the key claims above and emphasize production tradeoffs.

  • Reflection agents add a critique stage so outputs can be improved before finalization.
  • A reflection agent introduces self-critique into the graph loop.
  • Reflection catches omission, weak structure, and policy violations before user exposure.
  • Instead of returning the first draft, the system generates output, critiques it, and revises if needed.
  • Operational constraints: reflection increases latency and token usage, so it should be conditional (risk-based or confidence-based), not always-on for every request.
  • Production guidance: enforce max reflection rounds, explicit scoring rubric, and final fallback route when improvement plateaus.
  • More agent autonomy increases adaptability but also increases non-determinism and debugging effort.
  • Minimal reflection architecture starts with a generator node that produces the draft response.

Tradeoffs You Should Be Able to Explain

  • More agent autonomy increases adaptability but also increases non-determinism and debugging effort.
  • Tool-heavy loops improve grounding, but latency and failure surfaces rise with each external dependency.
  • Fine-grained state graphs improve control, but poor state contracts can create brittle routing behavior.

First-time learner note: Think in state transitions, not giant prompts. Keep node responsibilities small and route logic deterministic so each step is easy to reason about.

Production note: Bound autonomy with loop limits, tool policies, and checkpoints. Capture route decisions and state snapshots for replay and incident analysis.
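Capturing route decisions with deep-copied state snapshots might look like the sketch below. The trace structure is an assumption for illustration, not a built-in LangGraph feature (LangGraph's own checkpointers serve a similar role in practice):

```python
# Sketch of recording route decisions and state snapshots for replay
# and incident analysis. The trace schema is illustrative.

import copy

trace = []

def record(node: str, decision: str, state: dict) -> None:
    """Append an immutable snapshot so incidents can be replayed later."""
    trace.append({"node": node, "decision": decision,
                  "state": copy.deepcopy(state)})

state = {"draft": "v1", "score": 0.6}
record("router", "revise", state)
state["draft"], state["score"] = "v2", 0.9  # node mutates state in place
record("router", "finalize", state)

# Deep copies keep the first snapshot intact despite the later mutation.
print(trace[0]["state"]["draft"], trace[1]["state"]["draft"])
```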


πŸ’‘ Concrete Example

Reflection loop walkthrough:

  1. Generator produces draft response.
  2. Reflector scores against rubric and flags missing clause.
  3. Router sends one revision pass.
  4. Revised draft includes missing evidence and better structure.
  5. Score crosses threshold and the graph finalizes.
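The walkthrough above can be expressed as a toy check for a single rubric item (a required clause). All names and draft text are illustrative:

```python
# One rubric item from the walkthrough: flag a missing required clause,
# allow one revision pass, then re-score. Everything here is a stub.

REQUIRED_CLAUSE = "termination"

def reflector(draft: str):
    """Score 1.0 if the required clause is present, else flag it."""
    ok = REQUIRED_CLAUSE in draft
    issues = [] if ok else [f"missing clause: {REQUIRED_CLAUSE}"]
    return (1.0 if ok else 0.5), issues

draft = "The contract covers payment terms."           # step 1: first draft
score, issues = reflector(draft)                       # step 2: flag missing clause
if score < 0.8:                                        # step 3: route one revision
    draft += " It also includes a termination clause." # step 4: revised draft
    score, issues = reflector(draft)                   # step 5: crosses threshold
print(score, issues)
```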



πŸ§ͺ Interactive Sessions

  1. Concept Drill: Manipulate key parameters and observe behavior shifts for Reflection Agent - Introduction.
  2. Failure Mode Lab: Trigger an edge case and explain remediation decisions.
  3. Architecture Reorder Exercise: Reorder 5 flow steps into the correct production sequence.

πŸ’» Code Walkthrough

Reflection system starter implementation.

  1. Observe iterative critique loop and stop condition.
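Since the interactive walkthrough is not reproduced here, a minimal dependency-free starter is sketched below. generate() and critique() are stubs for model calls; a real build would typically wire these as LangGraph nodes with a conditional edge for the stop condition:

```python
# Starter reflection loop with two stop conditions: reflector satisfied,
# or a hard round cap. Both "LLM" functions are deterministic stubs.

MAX_ROUNDS = 3

def generate(request: str, feedback: list) -> str:
    """Stub generator: fold critique feedback into the next draft."""
    base = f"answer to '{request}'"
    return base + ("; fixed: " + ", ".join(feedback) if feedback else "")

def critique(draft: str) -> list:
    """Stub reflector: one issue on the first pass, none afterwards."""
    return [] if "fixed:" in draft else ["add supporting evidence"]

def reflect(request: str):
    feedback, rounds, draft = [], 0, ""
    while rounds < MAX_ROUNDS:        # stop condition: hard round cap
        draft = generate(request, feedback)
        feedback = critique(draft)
        rounds += 1
        if not feedback:              # stop condition: reflector satisfied
            break
    return draft, rounds              # best effort if the cap was hit

final, rounds = reflect("summarize Q3 report")
print(rounds, final)
```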

🎯 Interview Prep

Questions an interviewer is likely to ask about this topic. Think through your answer before reading the senior angle.

  • Q1[beginner] What does a reflection agent add that a normal ReAct loop does not?
    Reflection adds a critique-and-revise stage that a basic ReAct loop may not include. It is optimized for answer quality control, not just action selection.
  • Q2[beginner] How do you decide whether to run reflection for a request?
    Run reflection conditionally using risk level, confidence, or compliance sensitivity. Always-on reflection is expensive and unnecessary for low-risk/simple intents.
  • Q3[intermediate] What controls prevent endless reflection loops?
    Prevent endless reflection with hard round caps, minimum-improvement thresholds, and explicit fallback/escalation routes when quality plateaus.
  • Q4[expert] How do you measure if reflection is worth its extra latency?
    Evaluate reflection ROI by comparing quality lift against added latency and token cost on representative traffic slices.
  • Q5[expert] How would you explain this in a production interview with tradeoffs?
    Treat reflection as a quality-control subsystem with explicit ROI: quality lift vs added latency and token cost.
πŸ† Senior answer angle β€” click to reveal
Use the tier progression: beginner correctness -> intermediate tradeoffs -> expert production constraints and incident readiness.

πŸ“š Revision Flash Cards

Test yourself before moving on. Flip each card to check your understanding β€” great for quick revision before an interview.

Loading interactive module...