Concept-Lab

Supervised Learning Algorithms

Transcript-backed ML fundamentals: linear regression, logistic regression, gradient descent, feature scaling, overfitting, and regularization.


Concepts Covered

1. Introduction to Machine Learning (Interactive): What ML is, where you already use it daily, and why this matters.
2. Why Machine Learning Matters (Interactive): ML as the dominant path to AI; the $13-trillion opportunity ahead.
3. ML Definition & Types (Lab): Supervised, unsupervised, and reinforcement learning — when to use each.
4. Supervised Learning — Regression (Interactive): Predicting continuous output values — the engine behind 99% of ML's economic value.
5. Supervised Learning — Classification (Lab): Predicting discrete categories rather than continuous values.
6. Unsupervised Learning (Interactive): Finding hidden structure in unlabelled data — clustering, anomaly detection, and more.
7. Unsupervised — Anomaly Detection (Interactive): Detecting fraud, defects, and outliers — anomaly detection alongside the other unsupervised techniques.
8. Jupyter Labs & Dev Environment (Interactive): The industry-standard ML environment — the same tooling used at Google, Meta, and Amazon.
9. Linear Regression Pipeline (Interactive): Your first supervised learning model — probably the most widely used ML algorithm in the world.
10. The Supervised Learning Pipeline (Interactive): How supervised learning works end-to-end — training set in, function out.
11. Cost Function (Interactive): Measuring how wrong your model is — Mean Squared Error (MSE) explained.
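As a quick preview of the MSE idea from node 11 (an illustrative sketch using the 1/(2m) convention common in ML courses, not the node's actual lab code):

```python
import numpy as np

def mse_cost(w, b, X, y):
    """Mean squared error cost for a linear model f(x) = w·x + b.

    Uses the 1/(2m) scaling so the gradient comes out without
    a stray factor of 2.
    """
    m = X.shape[0]
    predictions = X @ w + b      # vectorised predictions for all m examples
    errors = predictions - y
    return (errors @ errors) / (2 * m)

# Tiny example: a perfect fit has zero cost.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
print(mse_cost(np.array([2.0]), 0.0, X, y))  # → 0.0
```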
12. Cost Function Intuition (Interactive): What the cost function looks like — and why the bowl shape matters.
13. Cost Visualisation in 3D (Interactive): Contour plots and the 3D bowl — seeing the optimisation landscape with two parameters.
14. Parameters, Model & Cost — Together (Interactive): Connecting the model line, cost function, and contour plot into one unified picture.
15. Gradient Descent — Concept (Interactive): The core optimisation algorithm that trains virtually every ML model.
16. Gradient Descent — Update Rule (Lab): The update equations themselves — the math behind every gradient step.
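For orientation, the simultaneous update from node 16 can be sketched for one-feature linear regression (a minimal sketch under the 1/(2m) MSE convention, not the lab's own code):

```python
import numpy as np

def gradient_step(w, b, X, y, alpha):
    """One simultaneous gradient-descent update for f(x) = w*x + b
    under the 1/(2m)-scaled MSE cost."""
    m = X.shape[0]
    errors = X * w + b - y                 # prediction errors
    dw = (errors @ X) / m                  # ∂J/∂w
    db = errors.sum() / m                  # ∂J/∂b
    return w - alpha * dw, b - alpha * db  # update both parameters together

# Repeated steps on data from y = 2x drive w toward 2 and b toward 0.
X = np.array([1.0, 2.0, 3.0])
y = 2.0 * X
w, b = 0.0, 0.0
for _ in range(1000):
    w, b = gradient_step(w, b, X, y, alpha=0.1)
print(round(w, 3))  # → 2.0
```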
17. Derivative Intuition for Gradient Descent (Interactive): The tangent-line trick — why the sign and magnitude of the gradient guide every step.
18. Learning Rate (Interactive): The most critical hyperparameter — too large and training diverges, too small and it barely moves.
19. Completing Linear Regression (Interactive): The complete training loop: model, cost, and gradient derivation all in one.
20. Gradient Descent — Live Demo (Interactive): Watching the algorithm run — the parameter trajectory toward the minimum.
21. Multiple Linear Regression (Lab): Extending to many features simultaneously — the vectorised dot-product form.
22. Vectorisation (Lab): Why vectorised code can be 100× faster — NumPy and hardware parallelism.
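To make the speed-up in node 22 concrete (an illustrative sketch; the course lab may differ), here is the same dot product written as a Python loop versus a single NumPy call:

```python
import time
import numpy as np

n = 1_000_000
w = np.random.rand(n)
x = np.random.rand(n)

# Loop version: one multiply-add at a time in the Python interpreter.
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += w[i] * x[i]
loop_time = time.perf_counter() - start

# Vectorised version: one call into optimised, parallel native code.
start = time.perf_counter()
total_vec = np.dot(w, x)
vec_time = time.perf_counter() - start

print(f"loop {loop_time:.3f}s vs vectorised {vec_time:.5f}s")
print(np.isclose(total, total_vec))  # same result either way
```

The exact ratio depends on hardware, but the gap is typically orders of magnitude.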
23. Vectorisation — Under the Hood (Interactive): How NumPy, BLAS, and GPU kernels execute computations in parallel.
24. Feature Scaling (Interactive): Normalising features so gradient descent converges faster — a must-do preprocessing step.
25. Implementing Feature Scaling (Interactive): Coding z-score normalisation from scratch; using sklearn's StandardScaler.
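The from-scratch half of node 25 can be sketched as follows (illustrative; scikit-learn's StandardScaler performs the same per-column transform):

```python
import numpy as np

def zscore_normalise(X):
    """Z-score normalise each column: subtract its mean, divide by its std."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

# Features on wildly different scales (e.g. house size vs bedrooms).
X = np.array([[2104.0, 5.0],
              [1416.0, 3.0],
              [852.0, 2.0]])
X_norm, mu, sigma = zscore_normalise(X)
print(X_norm.mean(axis=0))  # ≈ [0, 0] after normalisation
print(X_norm.std(axis=0))   # ≈ [1, 1]
```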
26. Gradient Descent Convergence (Interactive): The learning curve — how to tell when training is done and when it's broken.
27. Choosing the Learning Rate (Interactive): The log-scale sweep strategy for finding a good α systematically.
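The log-scale sweep from node 27 can be sketched like this (hypothetical helper names; candidates spaced roughly 3× apart per decade, as is conventional):

```python
# Candidate learning rates spaced roughly 3× apart — a log-scale sweep.
candidates = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0]

def sweep(train_fn, candidates):
    """Run a short training job per candidate α and keep the best.

    `train_fn(alpha)` is a hypothetical callable returning the final
    cost after a fixed, small number of gradient-descent steps.
    """
    results = {alpha: train_fn(alpha) for alpha in candidates}
    best = min(results, key=results.get)
    return best, results

# Toy stand-in cost, minimised near α = 0.1, just to show the mechanics.
best, results = sweep(lambda a: abs(a - 0.1), candidates)
print(best)  # → 0.1
```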
28. Feature Engineering (Interactive): Creating better input features using domain knowledge — often the biggest performance lever.
29. Polynomial Regression (Interactive): Fitting curves, not just lines — by engineering x², x³ as new features.
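A minimal sketch of the trick in node 29 (illustrative; scikit-learn's PolynomialFeatures automates the same expansion):

```python
import numpy as np

def polynomial_features(x, degree=3):
    """Stack x, x², …, x^degree as columns so a *linear* model fits a curve."""
    return np.column_stack([x ** d for d in range(1, degree + 1)])

x = np.array([1.0, 2.0, 3.0])
X_poly = polynomial_features(x)
print(X_poly)
# [[ 1.  1.  1.]
#  [ 2.  4.  8.]
#  [ 3.  9. 27.]]
```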
30. Classification — Deep Dive (Interactive): Why linear regression fails for classification and what to use instead.
31. Logistic Regression (Lab): The sigmoid function — squashing any real number into a probability in (0, 1).
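The sigmoid in node 31 is essentially a one-liner (a sketch; the lab's version may add numerical-stability details):

```python
import numpy as np

def sigmoid(z):
    """Map any real number (or array) into (0, 1): g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))                      # → 0.5
print(sigmoid(np.array([-10.0, 10.0])))  # ≈ [4.54e-05, 0.99995]
```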
32. Decision Boundary (Interactive): Where the model draws the line between classes — linear and non-linear boundaries.
33. Logistic Regression — Cost Function (Lab): Why MSE creates non-convex surfaces for classification; introducing log loss.
34. Simplified Logistic Loss (Lab): Combining the y=0 and y=1 cases into one unified formula.
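The unified formula node 34 refers to, sketched in code (standard binary cross-entropy; the clipping is an added numerical safeguard against log(0)):

```python
import numpy as np

def log_loss(y, p, eps=1e-12):
    """Unified logistic loss: -[y·log(p) + (1-y)·log(1-p)], averaged over examples.

    Because y is always 0 or 1, one of the two terms vanishes per example —
    exactly the trick that merges the y=0 and y=1 cases into one formula.
    """
    p = np.clip(p, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0])
confident_right = np.array([0.99, 0.01])
confident_wrong = np.array([0.01, 0.99])
print(log_loss(y, confident_right))  # small loss
print(log_loss(y, confident_wrong))  # large loss
```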
35. Gradient Descent for Logistic Regression (Lab): The same update rule as linear regression — but with the sigmoid applied to the model's output.
36. Overfitting & Underfitting (Interactive): The bias-variance tradeoff — the single most important concept in applied ML.
37. Regularisation — Concept (Interactive): Adding a penalty for large weights — an elegant way to prevent overfitting.
38. Regularisation — Math for Linear Regression (Interactive): The L2 penalty added to MSE; weight decay in the gradient update.
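The weight-decay effect from node 38 can be sketched as follows (illustrative, using the (λ/2m)-scaled L2 penalty convention; b is conventionally left unpenalised):

```python
import numpy as np

def regularised_gradient_step(w, b, X, y, alpha, lam):
    """Gradient step for linear regression with an L2 penalty (λ/2m)·Σw².

    The penalty adds (λ/m)·w to the weight gradient, which shrinks
    (decays) the weights a little on every update; b is not penalised.
    """
    m = X.shape[0]
    errors = X @ w + b - y
    dw = (X.T @ errors) / m + (lam / m) * w   # data gradient + L2 term
    db = errors.sum() / m
    return w - alpha * dw, b - alpha * db

# With no data signal, the penalty alone shrinks w toward zero.
X = np.zeros((4, 2))
y = np.zeros(4)
w, b = np.array([5.0, -3.0]), 0.0
for _ in range(200):
    w, b = regularised_gradient_step(w, b, X, y, alpha=0.5, lam=1.0)
print(w)  # shrunk essentially to [0, 0]
```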
39. Regularised Logistic Regression (Interactive): Applying L2 regularisation to logistic regression — the production standard.