Multi-label classification is a problem where a single input can have multiple labels true at the same time. This is distinct from multiclass classification where each example belongs to exactly one class.
Example (autonomous driving): given a camera frame, predict whether there is a car (yes/no), a bus (yes/no), and a pedestrian (yes/no). One image can contain all three, none, or any combination.
Two approaches to multi-label classification:
- Separate models: Train one binary classifier per label. Simple but ignores shared features.
- Single shared network: Train one network with multiple sigmoid outputs, one per label. Shared hidden layers learn common feature representations; each output head is an independent binary classification.
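Under the shared-network approach, training minimizes a per-label binary cross-entropy, summed (or averaged) over labels. A minimal sketch in plain Python, with made-up labels and predicted probabilities:

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Standard binary cross-entropy for a single yes/no label.
    y_pred = min(max(y_pred, eps), 1 - eps)  # clamp to avoid log(0)
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))

def multi_label_loss(labels, probs):
    # Each label contributes its own independent BCE term.
    return sum(binary_cross_entropy(y, p) for y, p in zip(labels, probs))

# One image: car present, no bus, pedestrian present.
loss = multi_label_loss([1, 0, 1], [0.9, 0.2, 0.7])
```

Because the loss decomposes label by label, the shared layers receive gradient signal from all three questions at once, which is how the common features get learned.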
Architecture for approach 2:
Output layer: 3 sigmoid units (not softmax)
Each unit independently predicts: P(car), P(bus), P(pedestrian)
Note: use sigmoid (not softmax) for multi-label. Each output is independent, and the probabilities don't need to sum to 1.
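The output layer above can be sketched in plain Python. The weights, biases, and feature values here are made-up toy numbers, and the shared hidden layers are assumed to have already produced `shared_features`:

```python
import math

def sigmoid(z):
    # Squash a raw score into (0, 1); each label gets its own sigmoid.
    return 1.0 / (1.0 + math.exp(-z))

def multi_label_head(shared_features, weights, biases):
    """Apply one independent sigmoid unit per label to a shared feature vector.

    weights: one weight vector per label (car, bus, pedestrian)
    biases:  one bias per label
    """
    probs = []
    for w, b in zip(weights, biases):
        z = sum(wi * xi for wi, xi in zip(w, shared_features)) + b
        probs.append(sigmoid(z))
    return probs  # three independent probabilities; no need to sum to 1

# Toy example:
shared_features = [0.5, -1.2, 3.0]
weights = [[0.4, 0.1, 0.2], [-0.3, 0.8, 0.05], [0.9, -0.2, 0.1]]
biases = [0.0, -0.5, 0.3]
p_car, p_bus, p_ped = multi_label_head(shared_features, weights, biases)
```

Note there is no normalization step across the three units; that is the whole point of using sigmoids here.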
Interview-Ready Deepening
These points reinforce the core ideas above and add emphasis on production tradeoffs.
- Multi-label classification applies when one input can have multiple independent labels simultaneously: cars AND pedestrians in one image.
- These are multi-label classification problems because a single input, image X, is associated with three different labels indicating whether or not there are any cars, buses, or pedestrians in the image.
- This is a different type of classification problem, called a multi-label classification problem, in which each image can be associated with multiple labels.
- Example label vectors: in one image there is a car, no bus, and at least one pedestrian; in a second image, no cars, no buses, and yes to pedestrians; in a third, yes car, yes bus, and no pedestrians.
Tradeoffs You Should Be Able to Explain
- Separate per-label models are simple, but they ignore shared features; a single shared network reuses those features and is cheaper to serve, at the cost that a weak shared representation degrades all labels at once.
- More expressive models improve fit but can reduce interpretability and raise overfitting risk.
- Faster optimization reduces training time but may increase instability if learning dynamics are not monitored.
- Feature-rich pipelines raise the performance ceiling but increase maintenance and monitoring complexity.
First-time learner note: Read each model as a dataflow system: inputs become representations, representations become scores, and scores become decisions through a chosen loss and thresholding policy.
Production note: Track three things relentlessly in ML systems: data shape contracts, evaluation methodology, and the operational meaning of the model's errors. Most expensive failures come from one of those three.
Multi-label classification is not just multiclass with more outputs. The semantics are different. In multiclass, one label wins. In multi-label, each label is its own yes/no question. That is why independent sigmoid outputs make sense here, while softmax would be wrong because it would force the probabilities to compete unnecessarily.
Architecture flow: shared hidden representation -> one sigmoid head per label. This lets the network reuse shared visual or semantic features while still making independent decisions for each label.
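Downstream of the sigmoid heads, per-label probabilities become decisions by thresholding each label independently. The 0.5 threshold used here is a common default, not a requirement; it is a policy choice that can differ per label. A minimal sketch:

```python
def decide(probs, threshold=0.5):
    # Each label is its own yes/no question, so threshold independently.
    return [p >= threshold for p in probs]

# P(car)=0.9, P(bus)=0.2, P(pedestrian)=0.7
decisions = decide([0.9, 0.2, 0.7])  # -> [True, False, True]
```

Lowering the threshold for a safety-critical label like "pedestrian" trades more false alarms for fewer misses, which ties back to the operational meaning of the model's errors noted above.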