
Federated learning sits at the intersection of distributed machine learning and adversarial interference, and it faces growing security challenges. Federated learning enables collaborative model training across devices while preserving data privacy. However, this decentralized design also opens new vulnerabilities, particularly to adversarial attacks and data poisoning, in which malicious actors inject corrupted data or manipulate model updates to degrade performance or extract sensitive information. As the adoption of federated learning accelerates, understanding and mitigating these threats is essential to ensuring model integrity and resilience in real-world deployments. Adversarial AI and Data Poisoning in Federated Learning provides a comprehensive examination of emerging threats, attack vectors, and defense mechanisms within federated learning systems. This book highlights vulnerabilities of federated learning architectures, explores strategies for detecting and mitigating adversarial threats, and presents real-world case studies.
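To make the poisoning threat concrete, here is a minimal sketch (not taken from the book) of how a single malicious client can skew naive update averaging in federated learning, and how a robust aggregation rule such as the coordinate-wise median resists it. All values and names are hypothetical.

```python
from statistics import mean, median

# Hypothetical scenario: five clients each send a small model-update
# vector to the federated server. One client is malicious and sends
# a large poisoned update to drag the aggregate away from the truth.
honest_updates = [
    [0.10, -0.20, 0.05],
    [0.12, -0.18, 0.04],
    [0.09, -0.22, 0.06],
    [0.11, -0.19, 0.05],
]
poisoned_update = [5.0, 5.0, 5.0]  # adversarial outlier
updates = honest_updates + [poisoned_update]

def aggregate(updates, combine):
    # Combine updates coordinate-by-coordinate, as a federated
    # server would when merging client contributions.
    return [combine(coord) for coord in zip(*updates)]

naive = aggregate(updates, mean)     # FedAvg-style mean: skewed by the attacker
robust = aggregate(updates, median)  # coordinate-wise median: resists one outlier

print("mean  :", [round(v, 3) for v in naive])
print("median:", [round(v, 3) for v in robust])
```

A plain mean lets one outlier pull every coordinate far from the honest consensus, while the median stays close to the honest clients' values; this is the intuition behind the robust-aggregation defenses the book surveys.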
Page Count: 574
Publication Date: 2025-11-14
ISBN-13: 9798337362250