A Hybrid Architecture for Federated and Centralized Learning

dc.authorid: Elbir, Ahmet M./0000-0003-4060-3781
dc.authorid: Chatzinotas, Symeon/0000-0001-5122-0001
dc.authorid: Papazafeiropoulos, Anastasios/0000-0003-1841-6461
dc.authorid: Çöleri, Sinem/0000-0002-7502-3122
dc.authorid: Kourtessis, Pandelis/0000-0003-3392-670X
dc.authorwosid: Elbir, Ahmet M./X-3731-2019
dc.authorwosid: Chatzinotas, Symeon/D-4191-2015
dc.authorwosid: Çöleri, Sinem/O-9829-2014
dc.contributor.author: Elbir, Ahmet M.
dc.contributor.author: Çöleri, Sinem
dc.contributor.author: Papazafeiropoulos, Anastasios K.
dc.contributor.author: Kourtessis, Pandelis
dc.contributor.author: Chatzinotas, Symeon
dc.date.accessioned: 2023-07-26T11:53:50Z
dc.date.available: 2023-07-26T11:53:50Z
dc.date.issued: 2022
dc.department: DÜ, Faculty of Engineering, Department of Electrical and Electronics Engineering [en_US]
dc.description.abstract: Many machine learning tasks rely on centralized learning (CL), which requires the transmission of local datasets from the clients to a parameter server (PS), entailing huge communication overhead. To overcome this, federated learning (FL) has been suggested as a promising tool, wherein the clients send only the model updates to the PS instead of the whole dataset. However, FL demands powerful computational resources from the clients. In practice, not all clients have sufficient computational resources to participate in training. To address this common scenario, we propose a more efficient approach called hybrid federated and centralized learning (HFCL), wherein only the clients with sufficient resources employ FL, while the remaining ones send their datasets to the PS, which computes the model on their behalf. The model parameters are then aggregated at the PS. To improve the efficiency of dataset transmission, we propose two different techniques: i) increased computation-per-client and ii) sequential data transmission. Notably, the HFCL frameworks outperform FL with up to 20% improvement in learning accuracy when only half of the clients perform FL, while incurring 50% less communication overhead than CL, since all clients collaborate on the learning process with their datasets. [en_US]
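Since the abstract describes how HFCL combines the two learning modes, a minimal sketch of one HFCL training loop in Python/NumPy may help: half the clients run FL-style local updates and send only their parameters, while the PS trains on the datasets uploaded by the other half, then aggregates all models. The flat parameter vector, the least-squares toy loss, and the local_update helper are assumptions for illustration, not the paper's implementation.

    # A minimal HFCL sketch, assuming a model held as a flat parameter vector
    # and a least-squares toy objective (hypothetical stand-ins, not the
    # paper's actual model or training procedure).
    import numpy as np

    rng = np.random.default_rng(0)

    def local_update(theta, X, y, lr=0.1, epochs=5):
        # Plain gradient descent on a least-squares loss.
        for _ in range(epochs):
            grad = X.T @ (X @ theta - y) / len(y)
            theta = theta - lr * grad
        return theta

    # Toy data for 8 clients: the first half are resource-rich (FL side,
    # train locally), the second half are resource-poor (CL side, upload data).
    clients = [(rng.normal(size=(32, 4)), rng.normal(size=32)) for _ in range(8)]
    fl_clients, cl_clients = clients[:4], clients[4:]

    theta_global = np.zeros(4)
    for _ in range(20):  # communication rounds
        updates = []
        for X, y in fl_clients:  # capable clients send model updates only
            updates.append(local_update(theta_global, X, y))
        for X, y in cl_clients:  # PS trains on the uploaded datasets on these clients' behalf
            updates.append(local_update(theta_global, X, y))
        theta_global = np.mean(updates, axis=0)  # FedAvg-style aggregation at the PS

The "sequential data transmission" technique mentioned in the abstract would, per its description, spread the resource-poor clients' dataset uploads over the training rounds rather than sending everything up front; the sketch above omits that refinement.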
dc.description.sponsorship: ERC project AGNOSTIC; CHIST-ERA grant [CHIST-ERA-18-SDCDN-001]; Scientific and Technological Research Council of Turkey [119E350] [en_US]
dc.description.sponsorship: This work was supported in part by the ERC project AGNOSTIC, by CHIST-ERA grant CHIST-ERA-18-SDCDN-001, and by the Scientific and Technological Research Council of Turkey under grant 119E350. A preliminary version of this article was presented at the 2021 European Signal Processing Conference (EUSIPCO) [1] [DOI: 10.23919/EUSIPCO54536.2021.9616120]. [en_US]
dc.identifier.doi: 10.1109/TCCN.2022.3181032
dc.identifier.endpage: 1542 [en_US]
dc.identifier.issn: 2332-7731
dc.identifier.issue: 3 [en_US]
dc.identifier.scopus: 2-s2.0-85131765961 [en_US]
dc.identifier.scopusquality: Q1 [en_US]
dc.identifier.startpage: 1529 [en_US]
dc.identifier.uri: https://doi.org/10.1109/TCCN.2022.3181032
dc.identifier.uri: https://hdl.handle.net/20.500.12684/12620
dc.identifier.volume: 8 [en_US]
dc.identifier.wos: WOS:000852215200020 [en_US]
dc.identifier.wosquality: Q1 [en_US]
dc.indekslendigikaynak: Web of Science [en_US]
dc.indekslendigikaynak: Scopus [en_US]
dc.institutionauthor: Elbir, Ahmet M.
dc.language.iso: en [en_US]
dc.publisher: IEEE-Inst Electrical Electronics Engineers Inc [en_US]
dc.relation.ispartof: IEEE Transactions on Cognitive Communications and Networking [en_US]
dc.relation.publicationcategory: Article - International Refereed Journal - Institutional Faculty Member [en_US]
dc.rights: info:eu-repo/semantics/openAccess [en_US]
dc.subject: Machine Learning; Federated Learning; Centralized Learning; Edge Intelligence; Edge Efficiency [en_US]
dc.subject: Resource-Allocation; Intelligence; Design [en_US]
dc.title: A Hybrid Architecture for Federated and Centralized Learning [en_US]
dc.type: Article [en_US]

Files

Original bundle
Showing 1 - 1 of 1
Name: 12620.pdf
Size: 5.78 MB
Format: Adobe Portable Document Format
Description: Full Text