Concept-Lab

Machine Learning

Supervised learning, linear and logistic regression, gradient descent, cost functions, regularisation, and the full breadth of Andrew Ng's ML Specialisation.


Concepts Covered

Advanced Learning Algorithms

39. Learning Curves

How training and cross-validation error change as data grows, and what that tells you about whether collecting more data is worth it.

Interactive
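A sketch of the idea without any ML library: fit the simplest possible model (a constant) on growing slices of synthetic data and watch the two errors converge. All data and numbers below are invented for illustration.

```python
import random

random.seed(0)

# Synthetic data: y = 3 + Gaussian noise, so the best constant prediction is ~3
# and the irreducible error (the noise floor) has variance ~1.
data = [3.0 + random.gauss(0, 1) for _ in range(200)]
train, cv = data[:150], data[150:]

def mse(pred, ys):
    return sum((y - pred) ** 2 for y in ys) / len(ys)

# Learning curve: fit on the first m training examples, score on train and cv.
for m in [5, 10, 25, 50, 100, 150]:
    pred = sum(train[:m]) / m            # the fitted "model": a constant
    print(f"m={m:3d}  J_train={mse(pred, train[:m]):.3f}  J_cv={mse(pred, cv):.3f}")
```

Once J_train and J_cv have met near the noise floor, collecting more data stops helping; a persistent gap between them is the signature of variance.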
40. Deciding What to Try Next, Revisited

How bias and variance map directly to the next engineering move, so you stop guessing and start debugging systematically.

Interactive
41. Bias, Variance, and Neural Networks

Why deep learning changed the old bias-variance tradeoff story and gave engineers a new recipe for improving models.

Interactive
42. Iterative Loop of ML Development

The real workflow of ML engineering: choose architecture, train, diagnose, refine, and repeat until performance is good enough.

Interactive
43. Error Analysis

Manual review of model mistakes to discover which error classes matter most and where engineering effort will pay off.

Interactive
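The tallying step can be as simple as a counter over hand-applied tags. The categories and counts below are invented; the takeaway is that a ranked list shows where engineering effort will pay off.

```python
from collections import Counter

# Hypothetical tags from manually reviewing 100 misclassified emails.
error_tags = (["phishing"] * 53 + ["pharma spam"] * 21 +
              ["deliberate misspellings"] * 12 +
              ["unusual routing"] * 7 + ["other"] * 7)

# Rank error classes by frequency; attack the biggest bucket first.
for tag, count in Counter(error_tags).most_common():
    print(f"{tag:24s} {count:3d} / {len(error_tags)}")
```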
44. Adding Data

Targeted data collection, augmentation, and synthetic data generation as strategic tools for improving model quality.

Interactive
45. Transfer Learning

Use a model pre-trained on a large related dataset, then fine-tune it on your smaller task to get strong results with limited data.

Interactive
46. Full Cycle of a Machine Learning Project

Training a model is only one stage; real ML systems also require scoping, deployment, monitoring, retraining, and MLOps discipline.

Lab
47. Fairness, Bias, and Ethics

Why ML engineers must think about harm, subgroup performance, and mitigation plans before and after deployment.

Interactive
48. Error Metrics for Skewed Datasets

Why accuracy becomes misleading on rare-event problems, and how the confusion matrix gives a more truthful view of model usefulness.

Lab
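A minimal sketch of why the confusion matrix matters on a rare-event problem (the 1% positive rate below is invented):

```python
def confusion(y_true, y_pred):
    """Confusion-matrix counts for a binary problem; 1 is the rare positive class."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, fp, fn, tn

# 1% positive rate: a model that always predicts 0 scores 99% accuracy.
y_true = [1] * 10 + [0] * 990
always_zero = [0] * 1000

tp, fp, fn, tn = confusion(y_true, always_zero)
accuracy = (tp + tn) / len(y_true)
recall = tp / (tp + fn)
precision = tp / (tp + fp) if tp + fp else 0.0  # undefined when nothing is flagged
print(f"accuracy={accuracy:.2%}  recall={recall:.2%}  precision={precision:.2%}")
```

Despite 99% accuracy, recall is 0: the model never catches a single positive. Precision and recall expose this; accuracy hides it.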
49. Trading Off Precision and Recall

How threshold choices change which rare events you catch, which false alarms you accept, and why F1 is a useful but incomplete summary.

Interactive
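A sketch of a threshold sweep on toy scores (all values invented): raising the threshold trades recall for precision, and F1 summarises each operating point with a single number.

```python
def prf(y_true, scores, threshold):
    """Precision, recall, and F1 at one classification threshold."""
    y_pred = [1 if s >= threshold else 0 for s in scores]
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy scores: a higher threshold means fewer alarms, so precision rises
# while recall falls.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.6, 0.1]

for threshold in (0.3, 0.5, 0.7):
    p, r, f1 = prf(y_true, scores, threshold)
    print(f"threshold={threshold}: precision={p:.2f} recall={r:.2f} F1={f1:.2f}")
```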
50. Decision Tree Model

A decision tree predicts by asking a sequence of feature-based questions, routing an example down branches until it reaches a leaf decision.

Interactive
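A minimal sketch of prediction with a hand-built tree; the feature names echo the course's cat example, but the tree itself is invented rather than learned from data:

```python
# Internal nodes ask a feature-based question; leaves carry the decision.
tree = {
    "feature": "ear_shape",
    "branches": {
        "pointy": {
            "feature": "whiskers",
            "branches": {"present": {"label": "cat"},
                         "absent":  {"label": "not cat"}},
        },
        "floppy": {"label": "not cat"},   # leaf: no more questions
    },
}

def predict(node, example):
    """Route an example down the branches until it reaches a leaf decision."""
    while "label" not in node:
        answer = example[node["feature"]]
        node = node["branches"][answer]
    return node["label"]

print(predict(tree, {"ear_shape": "pointy", "whiskers": "present"}))  # cat
```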
51. Decision Tree Learning Process

How a tree is built recursively: choose the best split, partition the data, repeat on each branch, and stop when further splitting is no longer worth it.

Lab
52. Measuring Purity: Entropy

Entropy is the impurity measure that tells a decision tree how mixed a node is, with 0 meaning pure and 1 meaning maximally mixed in the binary case.

Lab
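A direct transcription of the binary entropy formula, where `p1` is the fraction of positive examples at a node:

```python
import math

def entropy(p1):
    """Binary entropy H(p1) = -p1*log2(p1) - (1-p1)*log2(1-p1)."""
    if p1 in (0.0, 1.0):
        return 0.0                 # a pure node has zero impurity
    p0 = 1.0 - p1
    return -p1 * math.log2(p1) - p0 * math.log2(p0)

print(entropy(0.5))   # 1.0  (maximally mixed)
print(entropy(1.0))   # 0.0  (pure)
```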
53. Choosing a Split with Information Gain

Information gain measures how much a candidate split reduces weighted entropy, allowing the tree to choose the most purity-improving feature.

Interactive
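A sketch of the computation, reusing binary entropy; the 5-cats-in-10 split below is a made-up example:

```python
import math

def entropy(p1):
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def information_gain(labels, left, right):
    """Entropy at the root minus the weighted entropy of the two children."""
    def p1(ys):
        return sum(ys) / len(ys)
    w_left = len(left) / len(labels)
    w_right = len(right) / len(labels)
    return entropy(p1(labels)) - (w_left * entropy(p1(left)) +
                                  w_right * entropy(p1(right)))

# Root: 5 cats out of 10 animals. Candidate split sends 4 of 5 cats left.
root = [1] * 5 + [0] * 5
left, right = [1, 1, 1, 1, 0], [1, 0, 0, 0, 0]
print(information_gain(root, left, right))
```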
54. Decision Tree: Putting It Together

The full tree-building algorithm combines repeated split selection, recursive branch construction, and stopping rules into one practical training loop.

Lab
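A pure-Python sketch of the whole loop under simplifying assumptions (binary features, binary labels, a fixed depth cap); all names and the toy data are illustrative:

```python
import math

def entropy(p1):
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def weighted_child_entropy(X, y, feature):
    """Weighted entropy of the two children made by splitting on a binary feature."""
    left = [lab for x, lab in zip(X, y) if x[feature] == 1]
    right = [lab for x, lab in zip(X, y) if x[feature] == 0]
    h = lambda part: entropy(sum(part) / len(part)) if part else 0.0
    return len(left) / len(y) * h(left) + len(right) / len(y) * h(right)

def build_tree(X, y, features, max_depth=3):
    """Pick the best split, recurse on each branch, stop when splitting stops paying."""
    p1 = sum(y) / len(y)
    # Stopping rules: pure node, no features left, or depth limit reached.
    if p1 in (0.0, 1.0) or not features or max_depth == 0:
        return {"label": int(p1 >= 0.5)}
    best = min(features, key=lambda f: weighted_child_entropy(X, y, f))
    if entropy(p1) - weighted_child_entropy(X, y, best) <= 0:
        return {"label": int(p1 >= 0.5)}    # no candidate split improves purity
    rest = [f for f in features if f != best]
    left = [(x, lab) for x, lab in zip(X, y) if x[best] == 1]
    right = [(x, lab) for x, lab in zip(X, y) if x[best] == 0]
    return {"feature": best,
            "yes": build_tree([x for x, _ in left], [lab for _, lab in left],
                              rest, max_depth - 1),
            "no": build_tree([x for x, _ in right], [lab for _, lab in right],
                             rest, max_depth - 1)}

def predict(node, x):
    while "label" not in node:
        node = node["yes"] if x[node["feature"]] == 1 else node["no"]
    return node["label"]

# Toy data: the label copies feature "a"; feature "b" is noise.
X = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}, {"a": 0, "b": 0}]
y = [1, 1, 0, 0]
tree = build_tree(X, y, ["a", "b"])
print(tree)   # {'feature': 'a', 'yes': {'label': 1}, 'no': {'label': 0}}
```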
55. One-Hot Encoding of Categorical Features

How to convert a feature with multiple discrete categories into several binary indicators so trees and other models can use it cleanly.

Lab
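A minimal sketch with no library dependencies; the ear-shape values are illustrative:

```python
def one_hot(values, categories=None):
    """Expand one categorical column into k binary indicator columns."""
    cats = categories or sorted(set(values))
    return [[int(v == c) for c in cats] for v in values], cats

ear_shapes = ["pointy", "floppy", "oval", "pointy"]
encoded, cats = one_hot(ear_shapes)
print(cats)      # ['floppy', 'oval', 'pointy']
print(encoded)   # [[0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
```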
56. Continuous-Valued Features

How trees handle numeric features by testing candidate thresholds and selecting the split with the highest information gain.

Lab
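A sketch of threshold selection: sort the values, try the midpoint between each consecutive pair, and keep the candidate with the highest information gain (the weights below are made up):

```python
import math

def entropy(p1):
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def best_threshold(values, labels):
    """Try midpoints between consecutive sorted values; keep the best gain."""
    pairs = sorted(zip(values, labels))
    root_h = entropy(sum(labels) / len(labels))
    best = (None, -1.0)
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2      # candidate threshold
        left = [lab for v, lab in pairs if v <= t]
        right = [lab for v, lab in pairs if v > t]
        if not left or not right:
            continue
        h = (len(left) / len(pairs) * entropy(sum(left) / len(left)) +
             len(right) / len(pairs) * entropy(sum(right) / len(right)))
        if root_h - h > best[1]:
            best = (t, root_h - h)
    return best

# Weight in lbs vs. "is cat" label (made-up values).
weights = [7, 8, 9, 9.5, 15, 18, 20]
is_cat =  [1, 1, 1, 1,   0,  0,  0]
print(best_threshold(weights, is_cat))   # splits cats from non-cats perfectly
```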
57. Regression Trees

Generalizing decision trees from class prediction to numeric prediction by minimizing weighted variance and predicting leaf averages.

Lab
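The same split-selection idea with variance in place of entropy; the height/weight numbers are invented:

```python
def variance(ys):
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys) / len(ys)

def best_split(values, targets):
    """Pick the threshold that most reduces weighted variance of the children."""
    pairs = sorted(zip(values, targets))
    n = len(pairs)
    root_var = variance(targets)
    best = (None, -1.0)
    for i in range(1, n):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for v, y in pairs if v <= t]
        right = [y for v, y in pairs if v > t]
        if not left or not right:
            continue
        reduction = root_var - (len(left) / n * variance(left) +
                                len(right) / n * variance(right))
        if reduction > best[1]:
            best = (t, reduction)
    return best

# Predict animal weight from height; a leaf predicts the mean of its targets.
heights = [1.0, 1.2, 1.5, 3.0, 3.2, 3.5]
weights = [5.0, 5.5, 6.0, 20.0, 21.0, 22.5]
t, reduction = best_split(heights, weights)
print(t)           # 2.25: the gap between the two height clusters
left_mean = sum(w for h, w in zip(heights, weights) if h <= t) / 3
print(left_mean)   # 5.5: the leaf prediction for the short group
```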
58. Using Multiple Decision Trees

Why single trees are sensitive to small data changes and how voting across many trees improves robustness.

Lab
59. Sampling with Replacement

Bootstrap sampling creates new training sets by repeatedly drawing from the original set with replacement.

Lab
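The whole technique is one line of standard-library Python:

```python
import random

random.seed(42)

original = ["a", "b", "c", "d", "e", "f", "g", "h", "i", "j"]

# A bootstrap sample is the same size as the original but drawn with
# replacement, so some examples repeat and others are left out entirely.
bootstrap = random.choices(original, k=len(original))
print(bootstrap)
print("unique examples:", len(set(bootstrap)))  # usually ~63% of the originals
```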
60. Random Forest Algorithm

Bagging plus random feature subsets per split yields more diverse trees and stronger aggregate performance.

Lab
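A deliberately tiny sketch using depth-1 "trees" so it fits in a few lines; real random forests grow full trees, but the two sources of randomness (bootstrap rows and a random feature subset per split) sit in the same places. All names and data are illustrative.

```python
import math
import random

random.seed(0)

def stump_error(X, y, feature):
    """Training error of a depth-1 tree splitting on one binary feature."""
    err = 0
    for value in (0, 1):
        labels = [lab for x, lab in zip(X, y) if x[feature] == value]
        if labels:
            majority = int(sum(labels) * 2 >= len(labels))
            err += sum(lab != majority for lab in labels)
    return err

def fit_stump(X, y, feature):
    node = {"feature": feature, "leaf": {}}
    for value in (0, 1):
        # An empty branch (possible in a bootstrap sample) falls back to all labels.
        labels = [lab for x, lab in zip(X, y) if x[feature] == value] or y
        node["leaf"][value] = int(sum(labels) * 2 >= len(labels))
    return node

def random_forest(X, y, n_trees=40):
    features = list(X[0])
    k = max(1, round(math.sqrt(len(features))))      # features tried per split
    forest = []
    for _ in range(n_trees):
        idx = random.choices(range(len(X)), k=len(X))   # bootstrap sample
        Xb, yb = [X[i] for i in idx], [y[i] for i in idx]
        candidates = random.sample(features, k)         # random feature subset
        best = min(candidates, key=lambda f: stump_error(Xb, yb, f))
        forest.append(fit_stump(Xb, yb, best))
    return forest

def predict(forest, x):
    votes = sum(t["leaf"][x[t["feature"]]] for t in forest)
    return int(votes * 2 >= len(forest))                # majority vote

# Label copies feature "a"; "b" and "c" are pure noise.
X = [{"a": a, "b": random.randint(0, 1), "c": random.randint(0, 1)}
     for a in [0, 1] * 10]
y = [x["a"] for x in X]
forest = random_forest(X, y)
print(sum(predict(forest, x) == t for x, t in zip(X, y)), "of", len(X), "correct")
```

Individual stumps trained without feature "a" are noisy, but the aggregate vote is dominated by the many trees that did see the informative feature.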
61. XGBoost

Boosted trees focus sequentially on hard examples and are often top-performing on structured/tabular tasks.

Interactive
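This is not XGBoost itself, which adds regularisation, second-order gradient information, and heavy systems-level optimisation; it is a pure-Python sketch of the core boosting loop, where each new weak learner is fit to the residuals the current ensemble still gets wrong:

```python
def fit_stump(x, residuals):
    """Best single-threshold regression stump for the current residuals."""
    best = None
    for t in sorted(set(x))[:-1]:                  # candidate thresholds
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left) +
               sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    _, t, lm, rm = best
    return lambda xi: lm if xi <= t else rm

def boost(x, y, rounds=200, lr=0.3):
    """Fit each new weak tree to the residuals, then shrink its contribution."""
    base = sum(y) / len(y)                         # start from a constant
    preds = [base] * len(y)
    trees = []
    for _ in range(rounds):
        residuals = [yi - p for yi, p in zip(y, preds)]
        tree = fit_stump(x, residuals)
        trees.append(tree)
        preds = [p + lr * tree(xi) for p, xi in zip(preds, x)]
    return lambda xi: base + lr * sum(t(xi) for t in trees)

# A step function no single stump can fit, but a boosted sequence can.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.0, 2.0, 2.0, 5.0, 5.0]
model = boost(x, y)
print([round(model(xi), 2) for xi in x])
```

Because every round concentrates on what is still mispredicted, the ensemble drives training error down far past what any individual weak tree could reach.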
62. When to Use Decision Trees

Choosing between tree ensembles and neural networks based on data modality, iteration speed, interpretability, and transfer learning needs.

Lab