Machine Learning
Supervised learning, linear and logistic regression, gradient descent, cost functions, regularisation, and the full breadth of Andrew Ng's Machine Learning Specialization.
Concepts Covered
Supervised Learning Algorithms
Advanced Learning Algorithms
Unsupervised, Recommenders & Reinforcement
Learning Curves
How training and cross-validation error change as data grows, and what that tells you about whether collecting more data is worth it.
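A minimal sketch of how a learning curve is computed, using a deliberately trivial "predict the training mean" model and made-up numbers; the data values and the held-out set are illustrative assumptions, not course material.

```python
# Illustrative learning-curve sketch: train a trivial "predict the
# training mean" model on growing subsets and record the mean-squared
# error on the training subset vs. a held-out (cross-validation) set.
train_y = [1.1, 1.9, 3.2, 3.8, 5.1, 6.0, 6.9, 8.1]  # hypothetical targets
cv_y = [2.0, 4.0, 6.0]                               # hypothetical held-out set

def mse(pred, ys):
    return sum((pred - y) ** 2 for y in ys) / len(ys)

train_err, cv_err = [], []
for m in range(1, len(train_y) + 1):
    subset = train_y[:m]
    pred = sum(subset) / m          # the "model": a single constant prediction
    train_err.append(mse(pred, subset))
    cv_err.append(mse(pred, cv_y))

for m, (tr, cv) in enumerate(zip(train_err, cv_err), start=1):
    print(f"m={m}  train={tr:.3f}  cv={cv:.3f}")
```

With one example the training error is exactly zero and grows as more data is added, while the cross-validation error reflects how well the fixed-capacity model generalizes; real learning curves plot these two sequences against m.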
Deciding What to Try Next, Revisited
How bias and variance map directly to the next engineering move, so you stop guessing and start debugging systematically.
Bias, Variance, and Neural Networks
Why deep learning changed the old bias-variance tradeoff story and gave engineers a new recipe for improving models.
Iterative Loop of ML Development
The real workflow of ML engineering: choose architecture, train, diagnose, refine, and repeat until performance is good enough.
Error Analysis
Manual review of model mistakes to discover which error classes matter most and where engineering effort will pay off.
Adding Data
Targeted data collection, augmentation, and synthetic data generation as strategic tools for improving model quality.
Transfer Learning
Use a model pre-trained on a large related dataset, then fine-tune it on your smaller task to get strong results with limited data.
Full Cycle of a Machine Learning Project
Training a model is only one stage; real ML systems also require scoping, deployment, monitoring, retraining, and MLOps discipline.
Fairness, Bias, and Ethics
Why ML engineers must think about harm, subgroup performance, and mitigation plans before and after deployment.
Error Metrics for Skewed Datasets
Why accuracy becomes misleading on rare-event problems, and how the confusion matrix gives a more truthful view of model usefulness.
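A small sketch of the accuracy trap on a skewed dataset, using synthetic labels (1% positives) and an assumed "always predict negative" baseline:

```python
from collections import Counter

# Skewed-data sketch: 1 positive in 100 examples. A classifier that
# always predicts 0 scores 99% accuracy yet catches no positives;
# the confusion matrix makes this visible.
y_true = [1] * 1 + [0] * 99
y_pred = [0] * 100                  # "always negative" baseline

cells = Counter((t, p) for t, p in zip(y_true, y_pred))
tp, fp = cells[(1, 1)], cells[(0, 1)]
fn, tn = cells[(1, 0)], cells[(0, 0)]

accuracy = (tp + tn) / len(y_true)
recall = tp / (tp + fn) if tp + fn else 0.0
print(accuracy, recall)             # high accuracy, zero recall
```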
Trading Off Precision and Recall
How threshold choices change which rare events you catch, which false alarms you accept, and why F1 is a useful but incomplete summary.
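The F1 score is the harmonic mean of precision and recall, which stays low whenever either component is low; a quick sketch with assumed numbers:

```python
# F1 = harmonic mean of precision and recall. Unlike the arithmetic
# average, it punishes a model that is strong on one metric but weak
# on the other.
def f1(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A degenerate "flag everything" model: perfect recall, terrible precision.
print(f1(0.02, 1.0))   # far below the 0.51 arithmetic mean
```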
Decision Tree Model
A decision tree predicts by asking a sequence of feature-based questions, routing an example down branches until it reaches a leaf decision.
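The routing idea can be sketched with a tiny hand-built tree; the features and labels here are hypothetical stand-ins, not a trained model:

```python
# A tiny hand-built decision tree as nested dicts: internal nodes test a
# boolean feature, leaves carry the final class label.
tree = {
    "feature": "ear_shape_pointy",
    "yes": {"leaf": "cat"},
    "no": {
        "feature": "whiskers",
        "yes": {"leaf": "cat"},
        "no": {"leaf": "not cat"},
    },
}

def predict(node, example):
    # Walk down the tree until a leaf is reached.
    while "leaf" not in node:
        branch = "yes" if example[node["feature"]] else "no"
        node = node[branch]
    return node["leaf"]

print(predict(tree, {"ear_shape_pointy": False, "whiskers": True}))  # cat
```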
Decision Tree Learning Process
How a tree is built recursively: choose the best split, partition the data, repeat on each branch, and stop when further splitting is no longer worth it.
Measuring Purity: Entropy
Entropy is the impurity measure that tells a decision tree how mixed a node is, with 0 meaning pure and 1 meaning maximally mixed in the binary case.
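The binary entropy function in a few lines, with the usual convention that 0·log(0) = 0:

```python
import math

def entropy(p1):
    """Binary entropy H(p1) in bits, taking 0 * log(0) as 0."""
    if p1 in (0.0, 1.0):
        return 0.0
    p0 = 1.0 - p1
    return -p1 * math.log2(p1) - p0 * math.log2(p0)

print(entropy(0.5))   # 1.0  (maximally mixed node)
print(entropy(1.0))   # 0.0  (pure node)
```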
Choosing a Split with Information Gain
Information gain measures how much a candidate split reduces weighted entropy, allowing the tree to choose the most purity-improving feature.
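The same computation as code, on an assumed 5-cat / 5-not-cat root node split into a 4/1 and a 1/4 branch:

```python
import math

def entropy(p1):
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def information_gain(labels, left, right):
    """Root entropy minus the size-weighted entropy of the two children."""
    def p1(ys):
        return sum(ys) / len(ys)
    w_left = len(left) / len(labels)
    w_right = len(right) / len(labels)
    return entropy(p1(labels)) - (w_left * entropy(p1(left))
                                  + w_right * entropy(p1(right)))

# Hypothetical node: 5 cats (1) and 5 not-cats (0), split into two branches.
root = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
gain = information_gain(root, [1, 1, 1, 1, 0], [1, 0, 0, 0, 0])
print(round(gain, 3))   # 0.278
```

The tree builder evaluates this quantity for every candidate feature at a node and splits on the one with the highest gain.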
Decision Tree: Putting It Together
The full tree-building algorithm combines repeated split selection, recursive branch construction, and stopping rules into one practical training loop.
One-Hot Encoding of Categorical Features
How to convert a feature with multiple discrete categories into several binary indicators so trees and other models can use it cleanly.
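A one-line sketch of the encoding, using a hypothetical three-valued "ear shape" feature:

```python
# One-hot encode a categorical feature with three possible values:
# exactly one indicator is 1, the rest are 0.
categories = ["pointy", "floppy", "oval"]

def one_hot(value):
    return [1 if value == c else 0 for c in categories]

print(one_hot("floppy"))   # [0, 1, 0]
```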
Continuous-Valued Features
How trees handle numeric features by testing candidate thresholds and selecting the split with the highest information gain.
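A sketch of the threshold search on a made-up numeric feature; the candidate thresholds are the midpoints between consecutive sorted values, and the data here is chosen to be cleanly separable for illustration:

```python
import math

def entropy(p1):
    if p1 in (0.0, 1.0):
        return 0.0
    return -p1 * math.log2(p1) - (1 - p1) * math.log2(1 - p1)

def gain_for_threshold(xs, ys, t):
    """Information gain of splitting on x <= t vs. x > t."""
    left = [y for x, y in zip(xs, ys) if x <= t]
    right = [y for x, y in zip(xs, ys) if x > t]
    if not left or not right:
        return 0.0
    def h(part):
        return entropy(sum(part) / len(part))
    w = len(left) / len(ys)
    return h(ys) - (w * h(left) + (1 - w) * h(right))

# Hypothetical numeric feature (e.g. weight) with binary labels.
weights = [7.2, 8.4, 9.2, 10.2, 11.0, 15.0]
labels  = [1,   1,   1,   0,    0,    0]
pairs = sorted(zip(weights, labels))
cands = [(pairs[i][0] + pairs[i + 1][0]) / 2 for i in range(len(pairs) - 1)]
best = max(cands, key=lambda t: gain_for_threshold(weights, labels, t))
print(best)   # the midpoint that separates the classes perfectly
```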
Regression Trees
Generalizing decision trees from class prediction to numeric prediction by minimizing weighted variance and predicting leaf averages.
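For regression, information gain is replaced by variance reduction; a small sketch with hypothetical target values:

```python
# Regression-tree split scoring: choose the split that most reduces the
# size-weighted variance of the targets; each leaf then predicts the
# mean of its examples.
def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def variance_reduction(ys, left, right):
    w = len(left) / len(ys)
    return variance(ys) - (w * variance(left) + (1 - w) * variance(right))

targets = [7.2, 8.4, 9.2, 10.2, 11.0]      # hypothetical numeric targets
red = variance_reduction(targets, targets[:2], targets[2:])
print(red)   # how much this candidate split tightens the leaves
```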
Using Multiple Decision Trees
Why single trees are sensitive to small data changes and how voting across many trees improves robustness.
Sampling with Replacement
Bootstrap sampling creates new training sets by repeatedly drawing from the original set with replacement.
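A bootstrap sample in two lines; the seed and the toy dataset are arbitrary choices for reproducibility:

```python
import random

# Bootstrap sample: draw n examples from an n-example dataset with
# replacement. Duplicates are expected, and on average about 63% of the
# distinct originals appear at least once.
random.seed(0)
data = list(range(10))
boot = [random.choice(data) for _ in range(len(data))]
print(boot)
print(len(set(boot)))
```

Each tree in a bagged ensemble is trained on its own bootstrap sample, which is what makes the trees differ from one another.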
Random Forest Algorithm
Bagging plus random feature subsets per split yields more diverse trees and stronger aggregate performance.
XGBoost
Boosted trees focus sequentially on hard examples and are often top-performing on structured/tabular tasks.
When to Use Decision Trees
Choosing between tree ensembles and neural networks based on data modality, iteration speed, interpretability, and transfer learning needs.