Adding data should be guided, not generic. The source note makes an important point: "more data" is often helpful, but "more data of everything" can be slow and expensive. Error analysis should tell you which slice of the data deserves focused collection.
Targeted collection: if pharma spam dominates your mistakes, collect more pharma spam examples instead of just more email overall. This is often much cheaper and more effective than broad, unfocused data growth.
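One way to make this concrete is to rank error slices by frequency before collecting anything. A minimal sketch, assuming each misclassified example has been hand-tagged during error analysis; `collection_priorities` and the tag names are hypothetical:

```python
from collections import Counter

# Hypothetical error records from a review of misclassified emails,
# each tagged with a slice label assigned during error analysis.
errors = [
    {"id": 1, "tag": "pharma_spam"},
    {"id": 2, "tag": "pharma_spam"},
    {"id": 3, "tag": "phishing"},
    {"id": 4, "tag": "pharma_spam"},
    {"id": 5, "tag": "misspellings"},
]

def collection_priorities(errors):
    """Rank error slices by frequency so data collection targets
    the slices that dominate the mistakes."""
    counts = Counter(e["tag"] for e in errors)
    return counts.most_common()

print(collection_priorities(errors))
# pharma_spam ranks first, so collect more pharma spam examples.
```

The ranked list is the collection plan: spend the data budget on the top slices, not uniformly.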
Data augmentation: create additional examples by transforming existing ones while preserving the label. For images, this could mean rotation, resizing, contrast changes, or warping. For audio, it could mean background noise, microphone degradation, or channel distortion. The core rule is that the augmentation must resemble noise the model will actually face at test time.
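The audio case can be sketched as a label-preserving transform: mix in background noise at a controlled signal-to-noise ratio, leaving the transcription label untouched. This assumes 1-D NumPy arrays; `add_background_noise` is a hypothetical helper, not an API from the source:

```python
import numpy as np

def add_background_noise(audio, noise, snr_db=10.0):
    """Mix background noise into an audio clip at a target
    signal-to-noise ratio; the transcription label is unchanged.
    The noise clip should resemble conditions the model will
    actually face at test time."""
    noise = noise[: len(audio)]
    signal_power = np.mean(audio ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Scale the noise so the resulting SNR matches snr_db.
    scale = np.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return audio + scale * noise
```

Varying `snr_db` across the realistic range (quiet office vs. busy street) broadens the augmented distribution while keeping it plausible.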
Data synthesis: generate brand-new training examples from scratch. The OCR example is a classic case: synthesize text with many fonts, colors, and layouts. This can massively expand training data if the synthetic distribution is realistic enough.
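A text-domain analogue of the OCR idea (rendering images needs graphics libraries, so this sketch synthesizes spam text instead): generate brand-new labeled examples from templates. The templates, slot values, and `synthesize_spam` helper are all hypothetical illustrations:

```python
import random

# Hypothetical templates for a spam classifier; each filled-in
# template is a brand-new training example with a known label.
TEMPLATES = [
    "Buy {product} now at {discount}% off!",
    "Limited offer: {product} for only ${price}.",
]
PRODUCTS = ["pills", "watches", "software"]

def synthesize_spam(n, seed=0):
    """Generate n synthetic spam examples from scratch by sampling
    templates and slot values."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        text = rng.choice(TEMPLATES).format(
            product=rng.choice(PRODUCTS),
            discount=rng.choice([50, 70, 90]),
            price=rng.choice([9, 19, 29]),
        )
        examples.append({"text": text, "label": "spam"})
    return examples
```

As with OCR fonts and layouts, the value comes from variety: the more realistic the combinations, the closer the synthetic distribution sits to real spam.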
Data-centric AI insight: for years, most researchers fixed the data and improved the model. In many real projects today, the model family is already strong enough, and the most productive improvement comes from engineering the data: labels, failure slices, augmentation policy, or synthetic generation.
Architecture note: data work should be versioned just like model code. If augmentation or synthesis changes the effective training distribution, that is an architectural change, not just "preprocessing." It needs evaluation and rollback discipline.
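One lightweight way to apply that discipline is to derive a version id from the augmentation/synthesis config itself, so any change to the effective training distribution is visible and reversible. A minimal sketch; `data_pipeline_version` and the config keys are hypothetical:

```python
import hashlib
import json

def data_pipeline_version(config):
    """Derive a stable version id from the data pipeline config, so
    any change to augmentation or synthesis settings shows up as a
    new version that can be evaluated and rolled back."""
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

v1 = data_pipeline_version({"rotate_deg": 10, "noise_snr_db": 10})
v2 = data_pipeline_version({"rotate_deg": 15, "noise_snr_db": 10})
# Changing the augmentation policy yields a new data version.
```

Logging this id alongside model checkpoints ties every trained model to the exact data distribution it saw.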
Interview-Ready Deepening
Source-backed reinforcement: these points restate the material in more depth than brief on-screen hints allow, with an emphasis on production tradeoffs.
- Targeted data collection, augmentation, and synthetic data generation as strategic tools for improving model quality.
- In many real projects today, the model family is already strong enough, and the most productive improvement comes from engineering the data: labels, failure slices, augmentation policy, or synthetic generation.
- Synthetic data generation has so far been used mostly for computer vision tasks and less for other application areas.
- Data augmentation: create additional examples by transforming existing ones while preserving the label.
- The core rule is that the augmentation must resemble noise the model will actually face at test time.
- Synthesis can massively expand training data if the synthetic distribution is realistic enough.
- Data-centric AI insight: for years, most researchers fixed the data and improved the model; in many projects the leverage has now shifted to improving the data.
- Instead of adding data indiscriminately, focus on adding more data of the types where error analysis has indicated it is likely to help.
Tradeoffs You Should Be Able to Explain
- More agent autonomy increases adaptability but also increases non-determinism and debugging effort.
- Tool-heavy loops improve grounding, but latency and failure surfaces rise with each external dependency.
- Fine-grained state graphs improve control, but poor state contracts can create brittle routing behavior.
First-time learner note: Read each model as a dataflow system: inputs become representations, representations become scores, and scores become decisions through a chosen loss and thresholding policy.
Production note: Track three things relentlessly in ML systems: data shape contracts, evaluation methodology, and the operational meaning of the model's errors. Most expensive failures come from one of those three.
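The first of those three, data shape contracts, can be enforced with a small validation pass before a batch reaches the model. A minimal sketch; `check_contract` and the schema are hypothetical:

```python
def check_contract(batch, schema):
    """Validate that each record in a batch has the expected field
    names and types; return a list of human-readable violations."""
    errors = []
    for i, record in enumerate(batch):
        for field, expected_type in schema.items():
            if field not in record:
                errors.append(f"record {i}: missing '{field}'")
            elif not isinstance(record[field], expected_type):
                errors.append(
                    f"record {i}: '{field}' is not {expected_type.__name__}"
                )
    return errors

schema = {"text": str, "label": int}
# A clean batch produces no violations; a malformed one is caught
# before it silently corrupts training or evaluation.
```

Running this at pipeline boundaries turns silent schema drift into a loud, debuggable failure.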
Data improvements should be slice-aware. Collecting more examples is most useful when guided by failure distribution, not by total volume alone.
Data-centric discipline: augmentation and synthesis are model changes in disguise because they alter training distribution. They need the same validation rigor as architecture changes.