Date: 27-April-2026
Time: 12:00 PM ET (New York Time)
Presenter: Dr. Abbas Yazdinejad
Based on the IEEE Xplore® article of the same title, published in IEEE Transactions on Information Forensics and Security, June 2024.
Download article: The original article will be made publicly available for download for 48 hours, beginning on the day of the webinar. ARTICLE LINK
Abstract
Federated learning enables collaborative model training without direct data sharing, offering an important layer of privacy for distributed machine learning systems. However, federated learning remains vulnerable to model poisoning attacks, where malicious participants submit manipulated updates to degrade model performance. These risks become more severe in privacy-preserving settings, where encrypted gradients limit the applicability of traditional defense mechanisms and complicate anomaly detection, especially under heterogeneous data distributions.
This talk presents a robust privacy-preserving federated learning framework designed to defend against model poisoning attacks while maintaining efficiency and model accuracy. The proposed approach introduces an internal auditing mechanism capable of analyzing encrypted gradients without exposing sensitive information. By combining additive homomorphic encryption with statistical modeling techniques, the system distinguishes benign from malicious updates even in non-IID environments, where data is not independent and identically distributed across clients. The framework incorporates probabilistic clustering and distance-based analysis to support Byzantine-tolerant aggregation while minimizing computational and communication overhead. Experimental results across multiple datasets demonstrate that the proposed method significantly improves robustness against both targeted and untargeted poisoning attacks, achieving strong accuracy, scalability, and privacy guarantees. This work highlights how carefully designed encrypted analytics can strengthen the security of federated learning systems without sacrificing practicality.
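To make the distance-based, Byzantine-tolerant aggregation idea concrete, here is a minimal plaintext sketch in the spirit of Krum-style filtering: each client update is scored by its summed distance to its nearest peers, the most outlying updates are discarded, and the rest are averaged. This is only an illustration of the general technique; the framework described in the talk performs such analysis over encrypted gradients via additive homomorphic encryption, which this toy omits, and its exact scoring and clustering steps are not reproduced here.

```python
import math
from statistics import mean

def euclid(a, b):
    """Euclidean distance between two update vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def robust_aggregate(updates, n_byzantine):
    """Discard the n_byzantine most outlying updates, then average.

    Each update is scored by the sum of its distances to its
    (n - n_byzantine - 1) nearest neighbours; poisoned updates that
    sit far from the honest cluster receive large scores.
    """
    n = len(updates)
    scores = []
    for i, u in enumerate(updates):
        dists = sorted(euclid(u, v) for j, v in enumerate(updates) if j != i)
        scores.append(sum(dists[: n - n_byzantine - 1]))
    # Keep the (n - n_byzantine) lowest-scoring updates.
    kept = [u for _, u in sorted(zip(scores, updates))][: n - n_byzantine]
    # Coordinate-wise average of the surviving updates.
    return [mean(dim) for dim in zip(*kept)]

# Three honest updates near (1, 1) and one poisoned outlier.
honest = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]]
poisoned = [[10.0, -10.0]]
agg = robust_aggregate(honest + poisoned, n_byzantine=1)
# The outlier is filtered out; agg stays close to (1, 1).
```

In a simple averaging scheme, the single poisoned update would drag the aggregate far from the honest cluster; the distance-based filter removes it before averaging.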
Biography
Abbas Yazdinejad received his Ph.D. in Computational Sciences (Cybersecurity) from the University of Guelph, Canada, in 2024.
He is currently an Assistant Professor in the Department of Computer Science at the University of Regina, SK, Canada, and Director of the Decentralized Cybersecurity & Artificial Intelligence Lab (DCAILab). He is also a Balsillie Scholar at the Balsillie School of International Affairs (BSIA), Waterloo, Canada. His research focuses on advanced AI-driven cybersecurity and trustworthy intelligent systems, with particular emphasis on agentic AI, autonomous cyber defense, privacy-preserving and federated learning, and AI governance.
Dr. Yazdinejad has been recognized among the World's Top 2% Scientists (Stanford University ranking). He has made sustained contributions across AI, machine learning, cybersecurity, decentralized systems, blockchain, and the IoT, with extensive publications in leading IEEE, Elsevier, and Springer venues. His work bridges theory and practice, addressing real-world challenges in critical infrastructure, healthcare, and public-sector governance through secure, accountable, and scalable AI systems.
