A Reliable Aggregation Method Based on Threshold Additive Secret Sharing in Federated Learning with Quality of Service (QoS) Support

Yu Ting Ting, Po Wen Chi*, Chien Ting Kuo

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Federated learning is a decentralized privacy-preserving mechanism that allows multiple clients to collaborate without exchanging their datasets. Instead, they jointly train a model by uploading their own gradients. However, recent research has shown that attackers can use clients’ gradients to reconstruct the original training data, compromising the security of federated learning. Thus, a growing number of studies aim to protect gradients using different techniques. One common technique is secret sharing. However, previous research has shown that when secret sharing is used to protect gradient privacy, the original gradient cannot be reconstructed if one share is lost or a server is damaged, causing federated learning to be interrupted. In this paper, we propose an approach that uses additive secret sharing for federated learning gradient aggregation, making it difficult for attackers to access clients’ original gradients. Additionally, our proposed method ensures that server damage or loss of gradient shares is unlikely, within a certain probability, to disrupt the federated learning operation. We also add a membership level system, allowing members of varying levels to ultimately obtain models with different accuracy levels.
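To illustrate the core idea behind the aggregation scheme described above, the following is a minimal sketch of (non-threshold) additive secret sharing applied to gradient aggregation. All names, the field modulus, and the integer encoding of gradients are illustrative assumptions; the paper's threshold reconstruction and QoS membership mechanisms are not reproduced here.

```python
import random

PRIME = 2**61 - 1  # illustrative field modulus, not taken from the paper


def share(value, n):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def share_gradient(gradient, n):
    """Share each gradient entry; returns one share vector per server."""
    per_entry = [share(g, n) for g in gradient]
    return [list(col) for col in zip(*per_entry)]


def local_sum(share_vectors):
    """A server sums, entry-wise, the share vectors it received from clients."""
    return [sum(col) % PRIME for col in zip(*share_vectors)]


# Two clients share their (integer-encoded) gradients across 3 servers.
g1, g2 = [5, 7, 9], [1, 2, 3]
servers_g1 = share_gradient(g1, 3)
servers_g2 = share_gradient(g2, 3)

# Each server aggregates locally without learning either gradient...
local_sums = [local_sum([servers_g1[i], servers_g2[i]]) for i in range(3)]

# ...and combining all servers' local sums reveals only the aggregate.
agg = [sum(col) % PRIME for col in zip(*local_sums)]
assert agg == [6, 9, 12]  # element-wise sum g1 + g2
```

No single server ever holds more than a uniformly random share of any client's gradient, which is why an attacker compromising one server learns nothing about the original training data; the paper's threshold variant additionally tolerates the loss of some shares.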

Original language: English
Article number: 8959
Journal: Applied Sciences (Switzerland)
Volume: 14
Issue number: 19
DOIs
Publication status: Published - Oct 2024

ASJC Scopus subject areas

  • General Materials Science
  • Instrumentation
  • General Engineering
  • Process Chemistry and Technology
  • Computer Science Applications
  • Fluid Flow and Transfer Processes

