Concept-Lab

LangChain

Chat models, prompt templates, chains, agents, and RAG implementations built with LangChain.

1

Introduction to LangChain

What LangChain is and why it exists — a widely adopted framework for building LLM apps.

Interactive
2

LangChain Overview

Core components: models, prompts, chains, memory, agents, tools.

Interactive
3

What is LangChain?

The Runnable interface, LCEL expression language, and composability philosophy.

Interactive
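LangChain's composability philosophy rests on piping runnables together with `|`. The idea can be sketched in plain Python with a toy class (illustrative names only, not LangChain's actual implementation):

```python
class Runnable:
    """Minimal stand-in for LangChain's Runnable: wraps a function
    and supports `|` composition the way LCEL does."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b runs a first, then feeds its output to b
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Compose two steps the way an LCEL chain composes prompt | model | parser
upper = Runnable(str.upper)
exclaim = Runnable(lambda s: s + "!")
chain = upper | exclaim
print(chain.invoke("hello"))  # HELLO!
```

In real LCEL, `prompt | model | parser` works the same way: each component implements the Runnable interface, and `|` builds a new runnable from the pair.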
4

Prerequisites

Python basics, API keys, .env setup — everything before you write LangChain code.

Theory
5

Dev Environment Setup

virtualenv, installing langchain-openai, and .env management best practices.

Theory
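A typical setup sequence for this module might look like the following (assuming macOS/Linux; the API key shown is a placeholder):

```shell
# Create and activate an isolated environment
python -m venv .venv
source .venv/bin/activate

# Install the OpenAI integration (pulls in langchain-core) plus dotenv support
pip install langchain-openai python-dotenv

# Keep secrets out of source control
echo "OPENAI_API_KEY=sk-..." > .env
echo ".env" >> .gitignore
```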
6

Chat Models - Overview

The structured message format — SystemMessage, HumanMessage, AIMessage.

Interactive
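Stripped of LangChain's message classes, each message is just a role plus content; the same three roles can be sketched with plain dicts (the dict form mirrors what SystemMessage, HumanMessage, and AIMessage represent):

```python
# Role/content pairs mirroring SystemMessage, HumanMessage, AIMessage
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is LangChain?"},
    {"role": "assistant", "content": "A framework for building LLM apps."},
]

# A chat model always receives the full ordered list, not a single string
for m in messages:
    print(f"{m['role']}: {m['content']}")
```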
7

Chat Models - Setup

Instantiating ChatOpenAI and making your first LangChain API call.

Interactive
8

Chat Models - Passing Chat History

How LLMs simulate memory — passing the full conversation list each call.

Lab
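The "memory" pattern can be sketched without any LLM at all: keep a list, append every turn, and pass the whole list on each call (the `fake_llm` function below is a stand-in, not a real model):

```python
def fake_llm(messages):
    """Stand-in for a chat model: replies based on the latest user turn.
    A real model would receive the same full message list."""
    last = messages[-1]["content"]
    return f"You said: {last}"

history = [{"role": "system", "content": "Be brief."}]

for user_turn in ["hi", "tell me more"]:
    history.append({"role": "user", "content": user_turn})
    reply = fake_llm(history)           # full history passed every call
    history.append({"role": "assistant", "content": reply})

print(len(history))  # 5: one system + two user + two assistant messages
```

Because the model is stateless, forgetting to append a turn is the classic bug: the next call simply never sees it.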
9

Chat Models - Alternative LLMs

Swapping providers with one line — Anthropic, Cohere, local Ollama.

Interactive
10

Chat Models - Real-time Conversation

Building a local multi-turn chat loop that keeps history in memory and answers follow-up questions correctly.

Lab
11

Chat Models - Cloud-Persisted History

Storing conversation history in Redis, DynamoDB, or Postgres for production.

Interactive
12

Prompt Templates

Parameterized, reusable, testable prompt construction — the clean production approach.

Lab
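The core idea behind prompt templates can be shown with plain string formatting (LangChain's ChatPromptTemplate adds validation, partials, and message roles on top of this; `render` here is an illustrative helper, not a LangChain function):

```python
# A template separates fixed instructions from runtime variables,
# so the same prompt can be reused and unit-tested.
TEMPLATE = "Translate the following {language} text to English:\n{text}"

def render(template, **variables):
    return template.format(**variables)

prompt = render(TEMPLATE, language="French", text="Bonjour")
print(prompt)
```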
13

Chains — Overview

Composing prompts, models, and parsers into end-to-end LCEL pipelines.

Lab
14

Chains - Basic

Core single-chain construction from prompt to parsed output.

Interactive
15

Chains - Inner Workings

How data flows through LCEL components at runtime.

Lab
16

Chains - Sequential Chaining

Building linear multi-step workflows where each step feeds the next.

Lab
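A sequential chain is ordinary function composition: step two consumes step one's output. A plain-Python sketch (both functions are placeholders for LLM calls):

```python
def summarize(text):
    # Placeholder for an LLM call that summarizes
    return text.split(".")[0] + "."

def translate(summary):
    # Placeholder for an LLM call that translates
    return f"[FR] {summary}"

def sequential_chain(text):
    # Each step feeds the next, exactly like chain_a | chain_b in LCEL
    return translate(summarize(text))

print(sequential_chain("LangChain composes steps. It has many parts."))
```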
17

Chains - Parallel Chaining

Execute independent subchains concurrently to reduce latency.

Lab
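When subchains don't depend on each other, running them concurrently cuts latency to the slowest branch instead of the sum. A sketch with the standard library (LangChain's RunnableParallel plays the same role; `pros`/`cons` are placeholder branches):

```python
from concurrent.futures import ThreadPoolExecutor

def pros(topic):
    # Placeholder for an LLM call listing advantages
    return f"Pros of {topic}"

def cons(topic):
    # Placeholder for an LLM call listing drawbacks
    return f"Cons of {topic}"

def parallel_chain(topic):
    # Independent branches run concurrently, then the results are merged
    with ThreadPoolExecutor() as pool:
        pros_future = pool.submit(pros, topic)
        cons_future = pool.submit(cons, topic)
        return {"pros": pros_future.result(), "cons": cons_future.result()}

print(parallel_chain("RAG"))
```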
18

Chains - Conditional Chaining

Route requests to different subchains based on runtime conditions.

Lab
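Conditional chaining is routing: inspect the input, pick a branch. A minimal sketch (a production router might ask an LLM to classify; here a keyword check stands in):

```python
def positive_branch(text):
    # Placeholder subchain for positive feedback
    return "Thanks for the kind words!"

def negative_branch(text):
    # Placeholder subchain for complaints
    return "Sorry to hear that - we'll follow up."

def route(text):
    # Pick the subchain at runtime based on the input
    branch = negative_branch if "bad" in text.lower() else positive_branch
    return branch(text)

print(route("This was bad"))
print(route("Loved it"))
```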
19

RAGs - Intro

Introduction to retrieval-augmented generation in LangChain.

Interactive
20

RAGs - Workflow Part 1

First part of a practical RAG workflow implementation.

Interactive
21

RAGs - Embeddings & Vector DBs

Embedding generation and vector database indexing fundamentals.

Lab
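The heart of a vector database is nearest-neighbor search over embeddings, usually by cosine similarity. A toy sketch with hand-made three-dimensional "embeddings" (a real store holds model-generated vectors with hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": text mapped to precomputed embeddings
store = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.2, 0.9, 0.0],
    "stocks": [0.0, 0.1, 0.9],
}

# Embed the query the same way, then rank stored vectors by similarity
query = [0.85, 0.15, 0.05]
best = max(store, key=lambda key: cosine(query, store[key]))
print(best)  # cats
```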
22

RAGs - Workflow Part 1 (cont.)

Continuation of workflow setup and retrieval wiring.

Interactive
23

RAGs - Workflow Part 2

Second part of the end-to-end RAG workflow implementation.

Interactive
24

RAGs - Basic Example (1)

Ingestion pipeline: load a document, chunk it, embed it, and persist it in a local vector store.

Lab
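The chunking step of the ingestion pipeline can be sketched in a few lines: fixed-size windows with overlap, so content cut at a boundary still appears whole in at least one chunk (a simplified character splitter; LangChain's text splitters add separator awareness on top):

```python
def chunk(text, size=40, overlap=10):
    """Split text into fixed-size character chunks, each overlapping
    the previous one by `overlap` characters."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

doc = "LangChain loads documents, splits them, embeds each chunk, and stores the vectors."
pieces = chunk(doc)
print(len(pieces))
```

Each chunk would then be embedded and written to the vector store alongside its text.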
25

RAGs - Basic Example (2)

Query-time retrieval: load the vector store, embed the question, and tune threshold and top-k to return the right chunks.

Lab
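The threshold-and-top-k tuning can be sketched as a filter plus a sort over scored chunks (the `(chunk, similarity)` pairs below are made-up scores standing in for a retriever's output):

```python
def retrieve(scored_chunks, k=2, threshold=0.75):
    """Keep chunks scoring at or above the threshold, best-first, at most k.
    Threshold controls precision; k caps how much context is returned."""
    passing = [c for c in scored_chunks if c[1] >= threshold]
    passing.sort(key=lambda c: c[1], reverse=True)
    return passing[:k]

# (chunk, similarity) pairs as a retriever would score them
scored = [("intro", 0.91), ("pricing", 0.62), ("setup", 0.80), ("faq", 0.78)]
print(retrieve(scored))  # [('intro', 0.91), ('setup', 0.8)]
```

Too high a threshold returns nothing; too low a threshold (or too large a k) floods the prompt with marginal chunks.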
26

RAGs - With Metadata

Attach source information to chunks so retrieval returns both evidence and provenance.

Interactive
27

RAGs - One-off Question

Build one grounded prompt from retrieved chunks and answer statelessly from those documents only.

Interactive
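The stateless pattern boils down to one prompt: stuff the retrieved chunks into a context block and instruct the model to stay inside it. A sketch (the `grounded_prompt` helper and its wording are illustrative):

```python
def grounded_prompt(question, chunks):
    """Build one stateless prompt that grounds the answer in the
    retrieved chunks only."""
    context = "\n\n".join(chunks)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "What does LCEL stand for?",
    ["LCEL is the LangChain Expression Language.", "Chains compose via |."],
)
print(prompt)
```

The resulting string is sent as a single call with no history, which is what makes the question "one-off".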
28

Agents & Tools - Intro

Introduction to tool-using agent workflows in LangChain.

Lab
29

Agents & Tools - Deep Dive

Detailed agent execution flow, planning, and tool-calling behavior.

Lab