Author: Kang, Jiawen; Xiong, Zehui; Jiang, Chunxiao; Liu, Yi; Guo, Song; Zhang, Yang; Niyato, Dusit; Leung, Cyril; Miao, Chunyan
Title: Scalable and Communication-efficient Decentralized Federated Edge Learning with Multi-blockchain Framework
ID: fmoffpm8
Document date: 2020-08-10
Document: The emerging Federated Edge Learning (FEL) technique has drawn considerable attention because it not only ensures good machine learning performance but also solves the "data island" problem caused by data privacy concerns. However, large-scale FEL still faces the following crucial challenges: (i) it lacks a secure and communication-efficient model training scheme; (ii) there is no scalable and flexible FEL framework for managing local model updates and global model sharing (trading). To bridge these gaps, we first propose a blockchain-empowered secure FEL system with a hierarchical blockchain framework consisting of a main chain and subchains. This framework achieves scalable and flexible decentralized FEL by individually managing local model updates and model sharing records for performance isolation. A Proof-of-Verifying consensus scheme is then designed to remove low-quality model updates and manage qualified model updates in a decentralized and secure manner, thereby achieving secure FEL. To improve the communication efficiency of the blockchain-empowered FEL, a gradient compression scheme is designed to generate sparse but important gradients, reducing communication overhead without compromising accuracy and further strengthening privacy preservation of the training data. The security analysis and numerical results indicate that the proposed schemes achieve secure, scalable, and communication-efficient decentralized FEL.
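The gradient compression step described above transmits only "sparse but important" gradients. A common realization of this idea is top-k sparsification with local error feedback; the NumPy sketch below illustrates that general technique, not the paper's exact algorithm, and the function name and compression ratio are assumptions for illustration.

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient
    entries and zero out the rest. Returns the sparse gradient plus a
    residual of the dropped entries, which a client can accumulate
    locally and add back before the next round (error feedback)."""
    flat = grad.ravel()
    k = max(1, int(ratio * flat.size))
    # Indices of the k entries with the largest absolute value.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)
```

In a decentralized FEL round, each edge device would upload only the k nonzero entries (values plus indices), shrinking the per-round payload by roughly the compression ratio while the residual preserves the dropped gradient mass for later rounds.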