AI bias occurs when a model produces systematically unfair or inaccurate results for certain groups.
**Famous examples:**
- Amazon's hiring tool penalised CVs containing the word 'women's' (e.g. 'women's chess club captain') — it was trained on historically male-dominated hiring data
- Facial recognition systems with markedly higher error rates for darker skin tones — trained mostly on images of lighter-skinned faces
- Predictive policing algorithms over-targeting minority neighbourhoods — historical arrest data reflects where police already patrol, creating a feedback loop
**Sources of bias:**
- **Training data bias**: the data reflects historical inequalities, so the model learns them
- **Label bias**: human labellers bring their own biases into the ground truth
- **Measurement bias**: data quality, or the proxy variables used, differ across groups
**Mitigations:** Curate diverse and representative training data, evaluate with fairness metrics, run regular audits, and build diverse teams to develop the models.
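A minimal sketch of what "fairness metrics" can mean in practice — here the demographic parity difference, the gap in positive-prediction rates between groups. The data, group names, and threshold are hypothetical, and this is just one of many fairness metrics (others, like equalised odds, also condition on the true labels):

```python
# Sketch of one fairness metric: demographic parity difference.
# All data below is illustrative, not from a real system.

def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in selection rate between any two groups.
    0.0 means every group is selected at the same rate."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary hiring decisions, split by group
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # selection rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # selection rate 0.25
}
gap = demographic_parity_difference(preds)
print(round(gap, 3))  # prints 0.375
```

An audit might flag any gap above some agreed threshold (say 0.1) for investigation; the metric itself does not say *why* the gap exists, only that one group is being selected far less often.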