Abstract
In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be given by conditional mutual information terms involving feature and label variables.
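To make the role of conditional mutual information concrete, the following is a minimal sketch (not the paper's setup; the toy distribution and function names are illustrative assumptions) of a two-agent vertical setting where each agent holds one binary feature and the label is their XOR. Here I(X2; Y | X1) measures how much agent 2's feature tells a decentralized predictor about the label once agent 1's feature is known, which is the kind of quantity the abstract refers to.

```python
from itertools import product
from math import log2

# Hypothetical toy example: two agents hold binary features x1 and x2;
# the label is y = x1 XOR x2, with (x1, x2) uniform and independent.
samples = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
p = {s: 1 / len(samples) for s in samples}  # uniform joint pmf p(x1, x2, y)

def cond_mutual_info(p):
    """I(X2; Y | X1) in bits, computed from a joint pmf over (x1, x2, y)."""
    def marginal(keep):
        # Sum out the coordinates whose flag in `keep` is 0.
        m = {}
        for triple, pr in p.items():
            key = tuple(v for v, f in zip(triple, keep) if f)
            m[key] = m.get(key, 0.0) + pr
        return m

    p1 = marginal((1, 0, 0))    # p(x1)
    p12 = marginal((1, 1, 0))   # p(x1, x2)
    p1y = marginal((1, 0, 1))   # p(x1, y)

    # I(X2; Y | X1) = sum p(x1,x2,y) log2[ p(x1,x2,y) p(x1) / (p(x1,x2) p(x1,y)) ]
    return sum(
        pr * log2(pr * p1[(x1,)] / (p12[(x1, x2)] * p1y[(x1, y)]))
        for (x1, x2, y), pr in p.items()
    )

print(cond_mutual_info(p))  # 1.0 bit: given x1, feature x2 fully determines y
```

In this toy case the marginal mutual information I(X1; Y) is zero, so an agent predicting from its own feature alone learns nothing about the label, while jointly the features determine it exactly; the one-bit conditional mutual information captures that gap.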
| Original language | English |
|---|---|
| Article number | 485 |
| Number of pages | 10 |
| Journal | Entropy |
| Volume | 24 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 30 Mar 2022 |
Bibliographical note
Funding Information: The authors have received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 725731).
Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.
Keywords
- Bayesian learning
- information-theoretic analysis
- vertical federated learning
ASJC Scopus subject areas
- Information Systems
- Mathematical Physics
- Physics and Astronomy (miscellaneous)
- Electrical and Electronic Engineering