Concept-Lab
LangGraph πŸ•ΈοΈ 12 / 42

Reflexion Agent - Building Responder Chain

Build responder output contract: draft answer + critique + search terms for evidence collection.

Core Theory

The responder chain is the planning interface for Reflexion. It should not only draft an answer; it must also generate machine-usable signals for what evidence to fetch next.

Recommended output contract:

  • answer: first-pass response.
  • critique: weaknesses in coverage/factuality.
  • search_queries: concrete evidence intents.
  • confidence (optional): confidence prior for routing policy.
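The contract above can be sketched as a typed model. This is a minimal stdlib-dataclass version (the course's own schema.py uses Pydantic and its field names may differ; the validation shown here is illustrative):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ResponderOutput:
    """Typed contract emitted by the responder node."""
    answer: str                          # first-pass response
    critique: List[str]                  # concrete weaknesses in coverage/factuality
    search_queries: List[str]            # evidence intents for the tool node
    confidence: Optional[float] = None   # optional prior for the routing policy

    def __post_init__(self):
        if self.confidence is not None and not (0.0 <= self.confidence <= 1.0):
            raise ValueError("confidence must be in [0, 1]")

# The tool node can consume search_queries directly, with no string parsing:
draft = ResponderOutput(
    answer="The policy allows refunds within 14 days.",
    critique=["Missing enterprise exception clause"],
    search_queries=["enterprise refund exception policy"],
    confidence=0.61,
)
```

Because the output is structured, downstream nodes read fields instead of regex-parsing free text, which is the whole point of the contract.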

Why typed output matters: the tool node can execute immediately from search_queries without brittle string parsing, and the router can use confidence/flags deterministically.

Prompting guidance: force the responder to separate "known facts" from "needs verification" so search intents are high signal.

Failure mode: vague critiques like "add more detail" with no actionable query intents. Mitigate by requiring at least N specific search queries whenever confidence falls below a threshold.
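The mitigation above can be enforced with a small guard before routing. A sketch, where MIN_QUERIES, CONF_THRESHOLD, and the vague-phrase list are assumed tunables, not values from the course:

```python
MIN_QUERIES = 2        # assumed minimum query count when confidence is low
CONF_THRESHOLD = 0.8   # assumed confidence cutoff

def validate_responder_output(answer: str, critique: list,
                              search_queries: list, confidence: float) -> list:
    """Return a list of contract violations; an empty list means safe to route onward."""
    problems = []
    if confidence < CONF_THRESHOLD and len(search_queries) < MIN_QUERIES:
        problems.append(
            f"low confidence ({confidence}) but only {len(search_queries)} search queries"
        )
    # Reject critiques that carry no actionable signal.
    vague = {"add more detail", "improve the answer", "be more specific"}
    if any(c.strip().lower() in vague for c in critique):
        problems.append("critique contains non-actionable items")
    return problems
```

Running this in the node (or as a Pydantic validator) turns the "vague critique" failure mode into a loud, retryable error instead of a silent quality drop.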

Deepening Notes

These points are distilled from the course's walkthrough of the responder chain:

  • The responder and the revisor are both subcomponents of the actor because they share the same base chat prompt template; each partially applies it with different instructions.
  • The shared template is named actor_agent_prompt and is built from ChatPromptTemplate with a MessagesPlaceholder, no different from the reflection agent in the previous section.
  • The Reflection class carries two critique fields, missing and superfluous. An example missing entry: "the current answer lacks specific examples of AI tools or services that small businesses can use."
  • After the responder comes the revisor agent, and then the walkthrough returns to building the execute-tools node.


Tradeoffs You Should Be Able to Explain

  • More agent autonomy increases adaptability but also increases non-determinism and debugging effort.
  • Tool-heavy loops improve grounding, but latency and failure surfaces rise with each external dependency.
  • Fine-grained state graphs improve control, but poor state contracts can create brittle routing behavior.

First-time learner note: Think in state transitions, not giant prompts. Keep node responsibilities small and route logic deterministic so each step is easy to reason about.

Production note: Bound autonomy with loop limits, tool policies, and checkpoints. Capture route decisions and state snapshots for replay and incident analysis.
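One way to bound autonomy is a deterministic router that counts completed tool rounds in state and exits at a cap. A plain-function sketch; MAX_ITERATIONS and the message-dict state shape are assumptions, not the course's code:

```python
MAX_ITERATIONS = 2  # assumed cap on revise -> search loops

def route_after_revision(state: dict) -> str:
    """Deterministic router: count tool rounds in state, stop when the cap is hit."""
    tool_rounds = sum(1 for m in state["messages"] if m.get("type") == "tool")
    if tool_rounds >= MAX_ITERATIONS:
        return "END"
    return "execute_tools"

# One tool round so far -> loop back into evidence collection.
state = {"messages": [{"type": "ai"}, {"type": "tool"}, {"type": "ai"}]}
```

Because the decision depends only on observable state, every route choice can be logged and replayed during incident analysis.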


πŸ’‘ Concrete Example

Responder output example:

{
  "answer": "The policy allows refund within 14 days.",
  "critique": [
    "Missing enterprise exception clause",
    "No citation included"
  ],
  "search_queries": [
    "enterprise refund exception policy",
    "refund policy clause section id"
  ],
  "confidence": 0.61
}

The tool node executes both queries, stores normalized observations, and hands state to the reviser.
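The tool-node handoff described above can be sketched in a few lines. Here run_search is a stand-in for a real search tool (e.g. Tavily in the course repo), and the flat dict state shape is an assumption:

```python
def run_search(query: str) -> dict:
    """Stand-in for a real search tool; returns a canned, normalized observation."""
    return {"query": query, "snippet": f"result for: {query}"}

def execute_tools(state: dict) -> dict:
    """Tool node: run every responder query, append normalized observations to state."""
    observations = [run_search(q) for q in state["search_queries"]]
    return {**state, "observations": observations}

state = {
    "answer": "The policy allows refund within 14 days.",
    "search_queries": ["enterprise refund exception policy",
                       "refund policy clause section id"],
}
state = execute_tools(state)  # state now carries evidence for the reviser
```

Note that the tool node never parses prose: it iterates the typed search_queries field directly, which is the payoff of the output contract.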



πŸ§ͺ Interactive Sessions

  1. Concept Drill: Manipulate key parameters and observe behavior shifts for Reflexion Agent - Building Responder Chain.
  2. Failure Mode Lab: Trigger an edge case and explain remediation decisions.
  3. Architecture Reorder Exercise: Reorder 5 flow steps into the correct production sequence.

πŸ’» Code Walkthrough

Responder chain details are in the Reflexion chain module.

content/github_code/langgraph/4_reflexion_agent_system/chains.py

Responder/revisor prompts and tool-bound structured outputs.


content/github_code/langgraph/4_reflexion_agent_system/schema.py

Pydantic output schema for answer, critique, and references.

  1. Focus on first_responder_chain and tool binding.

🎯 Interview Prep

Questions an interviewer is likely to ask about this topic. Think through your answer before reading the senior angle.

  • Q1 [beginner] Why should the responder emit a critique and search terms together?
  • Q2 [intermediate] What schema fields are the minimum for this chain?
  • Q3 [expert] How do you validate responder outputs before tool execution?
  • Q4 [expert] How would you explain this in a production interview, with tradeoffs? Show chain-contract thinking, not just prompt writing.

Strong answer structure (applies to Q1-Q3): define the concept in one sentence, ground it in a concrete scenario (the responder output contract: draft answer + critique + search terms for evidence collection), then explain one tradeoff (more agent autonomy increases adaptability but also non-determinism and debugging effort) and how you'd monitor it in production.

πŸ† Senior answer angle
Use the tier progression: beginner correctness -> intermediate tradeoffs -> expert production constraints and incident readiness.
