Federated learning enables collaborative model training across devices while preserving data privacy. However, its decentralized nature also opens new vulnerabilities, particularly to adversarial attacks and data poisoning, where malicious actors inject corrupted data or manipulate model updates to degrade performance or extract sensitive information. As the adoption of federated learning accelerates, understanding and mitigating these threats is essential to ensuring model integrity and resilience in real-world deployments. Adversarial AI and Data Poisoning in Federated Learning provides a comprehensive examination of emerging threats, attack vectors, and defense mechanisms within federated learning systems. The book highlights vulnerabilities of federated learning architectures, explores strategies for detecting and mitigating adversarial threats, and presents real-world case studies.
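As a toy illustration (not drawn from the book) of the update-manipulation threat the synopsis describes, the sketch below compares plain federated averaging against coordinate-wise median aggregation, a commonly discussed robust-aggregation defense. The client update values are entirely hypothetical.

```python
from statistics import median

# Hypothetical updates for a 3-parameter model: two honest clients
# and one poisoned client submitting inflated values to skew the model.
honest_1 = [0.10, -0.20, 0.05]
honest_2 = [0.12, -0.18, 0.07]
poisoned = [9.00, 9.00, 9.00]
updates = [honest_1, honest_2, poisoned]

def fed_avg(updates):
    """Plain federated averaging: a single outlier shifts every coordinate."""
    return [sum(col) / len(col) for col in zip(*updates)]

def coordinate_median(updates):
    """Coordinate-wise median: ignores the extreme value in each coordinate."""
    return [median(col) for col in zip(*updates)]

avg = fed_avg(updates)               # pulled far toward the attacker's values
robust = coordinate_median(updates)  # stays close to the honest updates
```

Here the averaged first coordinate is dragged from roughly 0.11 to about 3.07 by one malicious client, while the median aggregate matches an honest update exactly, which is why robust aggregation is a standard mitigation studied in this setting.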
"synopsis" may belong to another edition of this title.
Seller: PBShop.store UK, Fairford, GLOS, United Kingdom
Hardcover. Condition: New. Shipped from the UK. Established seller since 2000. Seller Inventory # L2-9798337362243
Quantity: Over 20 available
Seller: preigu, Osnabrück, Germany
Book. Condition: New. Adversarial AI and Data Poisoning in Federated Learning | Vipul Jain (et al.) | Book | English | 2026 | IGI GLOBAL SCIENTIFIC PUBLISHING | EAN 9798337362243 | Responsible person for the EU: Libri GmbH, Europaallee 1, 36244 Bad Hersfeld, gpsr[at]libri[dot]de | Seller: preigu, print on demand. Seller Inventory # 134617359
Seller: AHA-BUCH GmbH, Einbeck, Germany
Book. Condition: New. Printed after ordering (print on demand). Seller Inventory # 9798337362243