LangChain
Chat models, prompt templates, chains, agents, and RAG implementations built with LangChain.
Introduction to LangChain
What LangChain is and why it exists — a widely adopted framework for building LLM apps.
LangChain Overview
Core components: models, prompts, chains, memory, agents, tools.
What is LangChain?
The Runnable interface, LCEL expression language, and composability philosophy.
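The composability idea behind LCEL can be shown without LangChain at all. Below is a library-free sketch: the `Step` class is illustrative, not LangChain's actual `Runnable`, but the `|` operator works the same way — each component's output feeds the next.

```python
class Step:
    """Minimal pipe-composable unit, mimicking the spirit of LCEL's Runnable."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # `a | b` yields a Step that runs a, then feeds its output to b.
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda topic: f"Tell me about {topic}.")
model = Step(lambda text: f"ECHO: {text}")   # stand-in for a real LLM call
parse = Step(lambda text: text.removeprefix("ECHO: "))

chain = prompt | model | parse
print(chain.invoke("LangChain"))  # -> Tell me about LangChain.
```

In real LCEL the same shape is `prompt | model | parser`, with each piece implementing the `Runnable` interface.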
Prerequisites
Python basics, API keys, .env setup — everything before you write LangChain code.
Dev Environment Setup
virtualenv, installing langchain-openai, and .env management best practices.
Chat Models — Overview
The structured message format — SystemMessage, HumanMessage, AIMessage.
Chat Models — Setup
Instantiating ChatOpenAI and making your first LangChain API call.
Chat Models — Passing Chat History
How LLMs simulate memory — passing the full conversation list each call.
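Chat model APIs are stateless; "memory" is just resending the accumulated message list on every call. A plain-Python sketch of that pattern (`fake_llm` is a stand-in for a real model call):

```python
# Chat models are stateless: each call receives the ENTIRE conversation so far.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def fake_llm(messages):
    # Stand-in for a real API call; a real model reads every message here.
    last = messages[-1]["content"]
    return f"You said: {last} (I can see {len(messages)} messages)"

def ask(question):
    history.append({"role": "user", "content": question})
    answer = fake_llm(history)          # the full history goes out every time
    history.append({"role": "assistant", "content": answer})
    return answer

ask("My name is Ada.")
print(ask("What is my name?"))  # the model sees the earlier turn in `history`
```

In LangChain the same list is built from `SystemMessage`, `HumanMessage`, and `AIMessage` objects instead of dicts.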
Chat Models — Alternative LLMs
Swapping providers with one line — Anthropic, Cohere, local Ollama.
Chat Models — Real-time Conversation
Building a local multi-turn chat loop that keeps history in memory and answers follow-up questions correctly.
Chat Models — Cloud-Persisted History
Storing conversation history in Redis, DynamoDB, or Postgres for production.
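Whatever the backend — Redis, DynamoDB, or Postgres — the access pattern is the same read-append-write cycle keyed by a session id. A minimal sketch using a local JSON file as the stand-in backend (the `FileChatHistory` class is illustrative, not a LangChain API):

```python
import json
import os
import tempfile

class FileChatHistory:
    """Per-session chat history persisted outside the process."""
    def __init__(self, path, session_id):
        self.path, self.session_id = path, session_id

    def _load_all(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def load(self):
        # Fetch only this session's messages, like a keyed Redis/DB lookup.
        return self._load_all().get(self.session_id, [])

    def append(self, role, content):
        data = self._load_all()
        data.setdefault(self.session_id, []).append(
            {"role": role, "content": content}
        )
        with open(self.path, "w") as f:
            json.dump(data, f)

path = os.path.join(tempfile.gettempdir(), "chat_history_demo.json")
if os.path.exists(path):
    os.remove(path)  # start the demo from a clean store

store = FileChatHistory(path, session_id="user-42")
store.append("user", "Hello!")
store.append("assistant", "Hi - how can I help?")
print(store.load())
```

Swapping the file for a database client changes only `_load_all` and `append`; the calling code stays the same.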
Prompt Templates
Parameterised, reusable, testable prompt construction — the clean production approach.
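The core idea — define the prompt once, render it with different variables, test it in isolation — can be sketched with the standard library's `string.Template` (LangChain's `ChatPromptTemplate` adds message roles on top of the same substitution idea):

```python
from string import Template

# A reusable, parameterised prompt: defined once, rendered many times,
# and unit-testable without ever calling a model.
SUMMARY_PROMPT = Template(
    "You are a $role.\n"
    "Summarise the following text in $n_sentences sentences:\n\n$text"
)

def render(role, n_sentences, text):
    return SUMMARY_PROMPT.substitute(role=role, n_sentences=n_sentences, text=text)

prompt = render("technical editor", 2, "LangChain composes LLM apps from parts.")
print(prompt)
```

Because the template is plain data, a test suite can assert on the rendered string without any API key or network access.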
Chains — Overview
Composing prompts, models, and parsers into end-to-end LCEL pipelines.
Chains - Basic
Core single-chain construction from prompt to parsed output.
Chains - Inner Workings
How data flows through LCEL components at runtime.
Chains - Sequential Chaining
Building linear multi-step workflows where each step feeds the next.
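The "each step feeds the next" shape reduces to folding a value through a list of functions. A library-free sketch (both step functions are stand-ins for LLM subchains):

```python
# Each step's output becomes the next step's input (a linear, LCEL-style chain).
def make_title(topic):
    # Step 1: stand-in for an LLM "write a title" call.
    return f"A Beginner's Guide to {topic}"

def make_outline(title):
    # Step 2: consumes step 1's output.
    return [f"{title}: Introduction", f"{title}: Core Concepts", f"{title}: Summary"]

def sequential_chain(value, steps):
    for step in steps:
        value = step(value)   # feed each result forward
    return value

outline = sequential_chain("RAG", [make_title, make_outline])
print(outline)
```

In LCEL the same wiring is written `step1 | step2`, but the data flow is identical.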
Chains - Parallel Chaining
Execute independent subchains concurrently to reduce latency.
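The fan-out/fan-in shape — one input, several independent subchains, one merged dict of results — can be sketched with a thread pool (the two branch functions are stand-ins for LLM subchains):

```python
from concurrent.futures import ThreadPoolExecutor

def summarise(text):
    # Stand-in for one LLM subchain.
    return text[:20]

def count_words(text):
    # An independent subchain over the same input.
    return len(text.split())

def parallel_chain(text, branches):
    # Fan out: all branches get the same input and run concurrently.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, text) for name, fn in branches.items()}
        # Fan in: collect every branch's result under its name.
        return {name: fut.result() for name, fut in futures.items()}

result = parallel_chain(
    "LangChain composes prompts, models, and parsers into pipelines.",
    {"summary": summarise, "word_count": count_words},
)
print(result)
```

Since each branch mostly waits on network I/O in a real chain, running them concurrently cuts total latency to roughly the slowest branch instead of the sum of all branches.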
Chains - Conditional Chaining
Route requests to different subchains based on runtime conditions.
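Routing is just a predicate choosing which subchain runs. A minimal sketch (both handlers are stand-ins for real subchains, and the keyword check stands in for whatever classifier or LLM picks the branch):

```python
def math_chain(question):
    return "math handler: " + question

def general_chain(question):
    return "general handler: " + question

def route(question):
    # The routing condition can be anything evaluated at runtime:
    # a keyword check, a classifier, or an LLM that picks the branch.
    if any(ch.isdigit() for ch in question):
        return math_chain(question)
    return general_chain(question)

print(route("What is 2 + 2?"))     # digits present -> math branch
print(route("Who wrote Hamlet?"))  # no digits -> general branch
```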
RAGs - Intro
Introduction to retrieval-augmented generation in LangChain.
RAGs - Workflow Part 1
First part of a practical RAG workflow implementation.
RAGs - Embeddings & Vector DBs
Embedding generation and vector database indexing fundamentals.
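A vector database answers one question: which stored vector is most similar to the query vector? The toy 3-dimensional "embeddings" below are hand-picked for the demo; real ones come from an embedding model and have hundreds or thousands of dimensions, but the similarity math is the same:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# A vector "index": text mapped to its (toy) embedding.
index = {
    "cats are mammals":  [0.9, 0.1, 0.0],
    "dogs are mammals":  [0.7, 0.3, 0.0],
    "stocks fell today": [0.0, 0.1, 0.9],
}

query_vec = [0.85, 0.15, 0.05]  # pretend embedding of "tell me about pets"
best = max(index, key=lambda text: cosine(query_vec, index[text]))
print(best)
```

A real vector DB (Chroma, FAISS, Pinecone, etc.) does exactly this lookup, just with approximate-nearest-neighbour indexes so it scales past brute force.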
RAGs - Workflow Part 1 (cont.)
Continuation of the workflow setup and retrieval wiring.
RAGs - Workflow Part 2
Second part of the end-to-end RAG workflow implementation.
RAGs - Basic Example (1)
Ingestion pipeline: load a document, chunk it, embed it, and persist it in a local vector store.
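The chunking step of that pipeline can be sketched in a few lines; the overlap keeps sentences that straddle a boundary retrievable from either side. (Sizes here are tiny for the demo; real pipelines typically chunk by hundreds of characters or by tokens.)

```python
# Split a document into fixed-size, overlapping chunks.
def chunk(text, size=500, overlap=100):
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap   # step forward, keeping `overlap` chars shared
    return chunks

doc = "LangChain supports retrieval-augmented generation. " * 4
pieces = chunk(doc, size=60, overlap=15)
print(len(pieces), "chunks")
# In a real pipeline, each chunk would now be embedded and written to a
# local vector store (e.g. Chroma) together with its position.
```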
RAGs - Basic Example (2)
Query-time retrieval: load the vector store, embed the question, and tune threshold and top-k to return the right chunks.
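The threshold and top-k knobs interact like this: the threshold discards weak matches outright, then top-k caps how many survivors are returned. A sketch using a plain dot-product score as a stand-in for cosine similarity over normalised vectors:

```python
def retrieve(query_vec, index, k=2, threshold=0.3):
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    scored = [(dot(query_vec, vec), text) for text, vec in index.items()]
    kept = [(s, t) for s, t in scored if s >= threshold]  # threshold filter
    kept.sort(reverse=True)                               # best match first
    return [t for _, t in kept[:k]]                       # top-k cut

index = {
    "chunk about embeddings": [0.9, 0.1],
    "chunk about agents":     [0.5, 0.5],
    "chunk about pricing":    [0.1, 0.9],
}
print(retrieve([1.0, 0.0], index, k=2, threshold=0.3))
```

Too low a threshold lets irrelevant chunks into the prompt; too small a k can drop the chunk that actually holds the answer — tuning both against real queries is the point of this lesson.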
RAGs - With Metadata
Attach source information to chunks so retrieval returns both evidence and provenance.
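Metadata rides alongside each chunk's text, so whatever retrieval returns can be cited. A sketch using keyword matching as a stand-in for vector search (the chunk contents and filenames are made up for the demo):

```python
# Each chunk carries metadata next to its text.
chunks = [
    {"text": "Paris is the capital of France.",
     "metadata": {"source": "geo.txt", "page": 1}},
    {"text": "The Seine flows through Paris.",
     "metadata": {"source": "rivers.txt", "page": 4}},
]

def retrieve_with_provenance(keyword):
    hits = [c for c in chunks if keyword.lower() in c["text"].lower()]
    # Return the evidence AND where it came from, for citation in the answer.
    return [(c["text"], c["metadata"]["source"]) for c in hits]

for text, source in retrieve_with_provenance("Seine"):
    print(f"{text}  [source: {source}]")
```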
RAGs - One-off Question
Build one grounded prompt from retrieved chunks and answer statelessly from those documents only.
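The "stuffing" step — retrieved chunks in, one grounded prompt out — is plain string assembly. A sketch (the instruction wording and example chunks are illustrative):

```python
def build_grounded_prompt(question, retrieved_chunks):
    # Stateless, grounded answering: put the retrieved evidence into the
    # prompt and instruct the model to answer ONLY from it.
    context = "\n\n".join(f"- {c}" for c in retrieved_chunks)
    return (
        "Answer the question using ONLY the context below. "
        "If the context is insufficient, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "When was the library released?",
    ["LangChain was first released in October 2022.",
     "It popularised the LCEL pipe syntax."],
)
print(prompt)
# This prompt then goes to the chat model in a single, stateless call:
# no history is kept, so every question is answered fresh from its chunks.
```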
Agents & Tools - Intro
Introduction to tool-using agent workflows in LangChain.
Agents & Tools - Deep Dive
Detailed agent execution flow, planning, and tool-calling behavior.
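The execution flow — model proposes an action, the runtime executes the tool, the observation is fed back, repeat until a final answer — can be sketched without any LLM. Here `fake_planner` stands in for the model's planning step, and the one `calculator` tool uses a restricted `eval` for demo purposes only:

```python
TOOLS = {
    # Demo-only tool: evaluates arithmetic with builtins disabled.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def fake_planner(question, observations):
    # A real agent asks the LLM to choose; here the plan is hard-coded:
    # first call the calculator, then answer from its observation.
    if not observations:
        return {"action": "calculator", "input": "6 * 7"}
    return {"action": "final", "input": f"The answer is {observations[-1]}."}

def run_agent(question, max_steps=5):
    observations = []
    for _ in range(max_steps):
        step = fake_planner(question, observations)
        if step["action"] == "final":
            return step["input"]
        # Execute the requested tool and record the observation for the
        # next planning round.
        observations.append(TOOLS[step["action"]](step["input"]))
    return "Step limit reached without a final answer."

print(run_agent("What is 6 times 7?"))  # -> The answer is 42.
```

The loop structure (plan, act, observe, repeat, with a step cap as a safety valve) is the same one LangChain's agent executor runs, with a real model doing the planning.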