WeeBytes

What is Federated Learning in Machine Learning?

Federated learning trains machine learning models across many decentralized devices without ever moving raw data to a central server. Only model updates, such as gradients or weight deltas, are shared and aggregated. This enables AI to learn from sensitive, distributed data while preserving privacy at the source.

In traditional machine learning, you collect data from all sources into a central repository, train a model on it, and deploy. This works, but it creates massive privacy risks, regulatory burdens, and logistical challenges when data is sensitive, regulated, or simply too large to move.

Federated learning inverts this architecture. Each participating device (a hospital server, a mobile phone, an edge sensor) trains a local model update using only its own data. These updates (gradients or model deltas) are sent to a central aggregator, combined using an algorithm like Federated Averaging (FedAvg), and used to improve the global model. The raw data never leaves its origin.

Google pioneered federated learning in production to improve Gboard's next-word prediction without reading users' keystrokes. Apple uses it for Siri and keyboard personalization. In healthcare, federated learning enables hospitals to collaboratively train diagnostic models on patient data they cannot legally share.

Federated learning is not a complete privacy solution: gradient updates can leak information under adversarial conditions. Combined with techniques like differential privacy and secure aggregation, however, it forms a robust privacy-preserving ML framework applicable wherever data sovereignty or regulation prevents centralized training.
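The aggregation step at the heart of FedAvg is simple enough to sketch directly: the server averages the clients' locally trained weights, with each client weighted by its local dataset size. This is a minimal illustration; the names `fed_avg` and `client_updates` are ours, not from any particular framework, and real systems average full per-layer tensors over many rounds.

```python
def fed_avg(client_updates):
    """Combine client weight vectors into one global model.

    client_updates: list of (weights, n_samples) pairs. Each client's
    contribution is weighted by its local dataset size, as in FedAvg.
    """
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    sums = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            sums[i] += n * w
    return [s / total for s in sums]

# Three simulated clients with 2-parameter models and unequal data sizes.
updates = [([1.0, 2.0], 10), ([3.0, 4.0], 30), ([5.0, 6.0], 60)]
print(fed_avg(updates))  # [4.0, 5.0]
```

Note that the client holding 60 samples pulls the average strongly toward its own weights; in each training round, every client then resumes local training from this freshly averaged global model.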
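Because updates can leak information, deployments often clip each client's update and add calibrated noise before sharing, in the style of differential privacy. The sketch below assumes an update is a flat list of floats; `dp_sanitize` and its parameters are hypothetical helpers for illustration, and real differentially private training involves careful privacy accounting beyond this.

```python
import random

def dp_sanitize(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip an update to an L2-norm bound, then add Gaussian noise.

    Clipping bounds any single client's influence on the aggregate;
    the noise masks individual contributions. Hypothetical helper,
    not an API from any real DP library.
    """
    rng = rng or random.Random(0)
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    return [x + rng.gauss(0.0, noise_std) for x in clipped]

# A raw update with L2 norm 5.0 is scaled down to norm 1.0 before
# noise is added (noise_std=0.0 here so the clipping is visible).
print(dp_sanitize([3.0, 4.0], clip_norm=1.0, noise_std=0.0))
```

Secure aggregation complements this: the server cryptographically learns only the sum of the sanitized updates, never any individual one.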

federated-learning · privacy-preserving-ml · distributed-training
