SPS Webinar: Federated Learning Using Three-Operator ADMM

Date: 29 March 2024
Time: 8:00 AM ET (New York Time)
Presenters: Dr. Shashi Kant and Dr. José Mairton B. da Silva, Jr.

Original article: Download Open Access article

Abstract

Federated learning (FL) has emerged as an instance of the distributed machine learning paradigm that avoids transmitting the data generated on the users’ side. Although data is not transmitted, edge devices have to deal with limited communication bandwidths, data heterogeneity, and straggler effects caused by the limited computational resources of users’ devices. A prominent approach to overcoming these difficulties is FedADMM, which is based on the classical two-operator consensus alternating direction method of multipliers (ADMM). The common assumption of FL algorithms, including FedADMM, is that they learn a global model using data only on the users’ side and not on the edge server. However, in edge learning, the server is expected to be near the base station and often has direct access to rich datasets.
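For context, the classical two-operator consensus ADMM underlying FedADMM alternates between local proximal updates, global averaging, and dual updates. The notation below is a generic textbook form chosen here for illustration and does not necessarily match the article's:

\begin{align*}
x_i^{k+1} &= \operatorname*{arg\,min}_{x_i} \; f_i(x_i) + \frac{\rho}{2}\,\lVert x_i - z^{k} + u_i^{k} \rVert_2^2, \\
z^{k+1}   &= \frac{1}{N} \sum_{i=1}^{N} \bigl( x_i^{k+1} + u_i^{k} \bigr), \\
u_i^{k+1} &= u_i^{k} + x_i^{k+1} - z^{k+1},
\end{align*}

where f_i is the local loss of user i, z is the global model, u_i is the scaled dual variable of user i, and \rho > 0 is the penalty parameter.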

In this webinar, the presenters argue that it is much more beneficial to leverage the rich data on the edge server than to rely on user datasets alone. Specifically, they show that merely applying FL with an additional virtual user node representing the data on the edge server is inefficient. They propose FedTOP-ADMM, which generalizes FedADMM and is based on a three-operator ADMM-type technique that exploits a smooth cost function on the edge server to learn a global model in parallel with the edge devices. Their numerical experiments indicate that FedTOP-ADMM achieves a substantial gain of up to 33% in communication efficiency to reach a desired test accuracy compared with FedADMM, even when the latter includes a virtual user on the edge server.
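As a rough, hypothetical sketch only (not the authors' implementation), the Python snippet below illustrates how a FedTOP-ADMM-style round could interleave client-side ADMM proximal updates with a server-side gradient step on a smooth loss held at the edge server. The toy quadratic losses, the aggregation rule, and all names (QuadraticClient, fedtop_admm_round, grad_g, rho, eta) are assumptions made for illustration.

import numpy as np

class QuadraticClient:
    """Toy client with loss f_i(x) = 0.5 * ||A x - b||^2, so the ADMM
    proximal step has a closed form (illustration only)."""
    def __init__(self, A, b):
        self.A, self.b = A, b

    def local_prox(self, v, rho):
        # argmin_x 0.5*||A x - b||^2 + (rho/2)*||x - v||^2
        d = self.A.shape[1]
        return np.linalg.solve(self.A.T @ self.A + rho * np.eye(d),
                               self.A.T @ self.b + rho * v)

def fedtop_admm_round(z, u, clients, grad_g, rho=1.0, eta=1e-3):
    """One hypothetical FedTOP-ADMM-style round: client-side proximal
    updates run alongside a server-side gradient step on its smooth loss g."""
    # Clients: proximal step against the current global model (two-operator part).
    x_new = [c.local_prox(z - u_i, rho) for c, u_i in zip(clients, u)]
    # Edge server (conceptually in parallel): gradient step on its own smooth loss.
    z_server = z - eta * grad_g(z)
    # Aggregation: combine client contributions with the server-side model.
    z_new = (sum(x + u_i for x, u_i in zip(x_new, u)) + z_server) / (len(clients) + 1)
    # Dual (scaled multiplier) updates kept by each client.
    u_new = [u_i + x - z_new for u_i, x in zip(u, x_new)]
    return z_new, u_new

# Minimal usage on synthetic data
rng = np.random.default_rng(0)
d = 5
clients = [QuadraticClient(rng.normal(size=(20, d)), rng.normal(size=20)) for _ in range(3)]
A_s, b_s = rng.normal(size=(50, d)), rng.normal(size=50)  # rich dataset on the edge server
grad_g = lambda z: A_s.T @ (A_s @ z - b_s)                # gradient of the smooth server loss
z = np.zeros(d)
u = [np.zeros(d) for _ in clients]
for _ in range(50):
    z, u = fedtop_admm_round(z, u, clients, grad_g, rho=1.0, eta=1e-3)

The point of the sketch is structural: the edge server contributes its own smooth-loss update in parallel with the users' proximal steps, rather than being treated as just another virtual user.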

Biography

Shashi Kant received the M.Sc.E.E. degree in signal and information processing for communications from Aalborg University, Denmark, in 2007 and the Ph.D. degree in electrical engineering from the KTH Royal Institute of Technology, Sweden, in 2022.

He is currently a Senior RAN Data Scientist with Ericsson AB, Sweden, specializing in data science, mathematical optimization, radio algorithms, and network/link-level simulations. From September 2007 to July 2008, he was a Research Assistant with Aalborg University and an External Researcher at Nokia (formerly Nokia Siemens Networks), Aalborg, Denmark. From July 2008 to January 2012, he was a Senior Engineer with Ericsson/ST-Ericsson, Lund, Sweden, working on 4G LTE baseband/physical-layer algorithms and simulations. He founded SLK Consultancy AB, Sweden, in January 2012, through which he worked as an LTE/LTE-A Physical Layer Algorithm Consultant for Huawei, Lund, from January 2012 to March 2014. Since March 2014, he has been with Ericsson AB, Stockholm, Sweden.

Dr. Kant holds more than a dozen patents (issued and pending). His current research interests include (large-scale) optimization, machine learning, signal processing, and their applications to wireless communications.

 

José Mairton B. da Silva, Jr. (Member, IEEE) received the B.Sc. (Hons.) and M.Sc. degrees in telecommunications engineering from the Federal University of Ceará, Brazil, in 2012 and 2014, respectively, and the Ph.D. degree from the KTH Royal Institute of Technology, Stockholm, Sweden, in 2019.

He is currently an Assistant Professor in the Division of Computer Systems at Uppsala University, Sweden. He was a postdoctoral researcher at the KTH Royal Institute of Technology, Stockholm, Sweden, from 2019 to 2021, and a Marie Skłodowska-Curie Postdoctoral Fellow with Princeton University, USA, and the KTH Royal Institute of Technology, Sweden, from 2022 to 2023.

Dr. Mairton B. da Silva, Jr. served as the Secretary of the IEEE Communications Society Emerging Technology Initiative on Full Duplex Communications from 2018 to 2021. He has been involved in the organization of many IEEE conferences and workshops, including co-chairing ICMLCN 2024 and SECON 2022-2023. He has given several tutorials at IEEE flagship conferences, including ICASSP, PIMRC, ICC, and GLOBECOM. His research interests include distributed machine learning and optimization over wireless communications.