Federated learning (FL) has emerged as an instance of the distributed machine learning paradigm that avoids transmitting the data generated on the users' side. Although data are not transmitted, edge devices have to deal with limited communication bandwidths, data heterogeneity, and straggler effects caused by the limited computational resources of users' devices. A prominent approach to overcoming such difficulties is FedADMM, which is based on the classical two-operator consensus alternating direction method of multipliers (ADMM). A common assumption of FL algorithms, including FedADMM, is that they learn a global model using data only on the users' side and not on the edge server. However, in edge learning, the server is expected to be near the base station and often has direct access to rich datasets. In this paper, we argue that leveraging the rich data on the edge server is much more beneficial than utilizing only user datasets. Specifically, we show that the mere application of FL with an additional virtual user node representing the data on the edge server is inefficient. We propose FedTOP-ADMM, which generalizes FedADMM and is based on a three-operator ADMM-type technique that exploits a smooth cost function on the edge server to learn a global model in parallel with the edge devices. Our numerical experiments indicate that FedTOP-ADMM achieves a substantial gain of up to 33% in communication efficiency to reach a desired test accuracy compared with FedADMM that includes a virtual user on the edge server.
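To make the consensus-ADMM setting concrete, the following is a minimal sketch of one way a federated ADMM round with an additional server-side gradient step on a smooth cost could look. The quadratic local losses, the server cost, all variable names, and the hyperparameters are assumptions introduced purely for illustration; this is not the authors' FedTOP-ADMM implementation.

```python
# Illustrative sketch: federated consensus ADMM with a server-side gradient
# step on a smooth cost. All losses and hyperparameters are assumed for demo.
import numpy as np

rng = np.random.default_rng(0)
d, n_users, n_rounds, rho, eta = 5, 4, 50, 1.0, 0.1

# Synthetic local least-squares problems f_k(x) = 0.5*||A_k x - b_k||^2 (assumed).
A = [rng.standard_normal((20, d)) for _ in range(n_users)]
b = [A_k @ rng.standard_normal(d) + 0.1 * rng.standard_normal(20) for A_k in A]

# Assumed server-side smooth cost g(z) = 0.5*||A_s z - b_s||^2 ("rich" server data).
A_s = rng.standard_normal((100, d))
b_s = A_s @ rng.standard_normal(d)

x = [np.zeros(d) for _ in range(n_users)]   # local primal variables
u = [np.zeros(d) for _ in range(n_users)]   # scaled dual variables
z = np.zeros(d)                             # global model

for _ in range(n_rounds):
    # User-side proximal updates: argmin_x f_k(x) + (rho/2)||x - z + u_k||^2,
    # available in closed form for the quadratic f_k used here.
    for k in range(n_users):
        H = A[k].T @ A[k] + rho * np.eye(d)
        x[k] = np.linalg.solve(H, A[k].T @ b[k] + rho * (z - u[k]))

    # Server aggregation (consensus step), followed by a gradient step on the
    # server's own smooth cost g -- the extra smooth operator in this sketch.
    z = np.mean([x[k] + u[k] for k in range(n_users)], axis=0)
    z = z - eta * A_s.T @ (A_s @ z - b_s) / len(b_s)

    # Dual updates on each user.
    for k in range(n_users):
        u[k] += x[k] - z

print("final global model:", z)
```

In this sketch, removing the server-side gradient step recovers a plain two-operator consensus ADMM round of the FedADMM type, which is the contrast the abstract draws.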
Centralized training of machine learning models becomes prohibitive for a large number of users, particularly if the users (also known as clients, agents, or workers) have to share large datasets with the central server. Furthermore, sharing a dataset with the central server may not be feasible for some users due to privacy concerns. Therefore, training algorithms based on distributed and decentralized approaches are preferred. This has led to the concept of federated learning (FL), which results from the synergy between large-scale distributed optimization techniques and machine learning. Consequently, FL has received considerable attention since its introduction in [1], [2].