An information-theoretic analysis of the cost of decentralization for learning and inference under privacy constraints

Sharu Theresa Jose, Osvaldo Simeone

Research output: Contribution to journal › Article › peer-review


Abstract

In vertical federated learning (FL), the features of a data sample are distributed across multiple agents. As such, inter-agent collaboration can be beneficial not only during the learning phase, as is the case for standard horizontal FL, but also during the inference phase. A fundamental theoretical question in this setting is how to quantify the cost, or performance loss, of decentralization for learning and/or inference. In this paper, we study general supervised learning problems with any number of agents, and provide a novel information-theoretic quantification of the cost of decentralization in the presence of privacy constraints on inter-agent communication within a Bayesian framework. The cost of decentralization for learning and/or inference is shown to be quantified by conditional mutual information terms involving the feature and label variables.
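For illustration, a conditional mutual information term of the kind referred to in the abstract, such as I(X2; Y | X1), measures how much information a second agent's feature carries about the label beyond what a first agent already observes, and it can be computed directly for a discrete joint distribution. The sketch below is a minimal illustrative example assuming a two-agent setting with binary features and a binary label; the toy distribution and variable names are assumptions for illustration, not quantities taken from the paper.

```python
import numpy as np

# Toy joint distribution p(x1, x2, y) over binary features x1, x2 (held by
# two different agents) and a binary label y. The numerical values are an
# arbitrary illustrative choice, not data from the paper.
p = np.array([
    [[0.15, 0.05],   # x1=0, x2=0, y=0/1
     [0.05, 0.15]],  # x1=0, x2=1, y=0/1
    [[0.10, 0.10],   # x1=1, x2=0, y=0/1
     [0.05, 0.35]],  # x1=1, x2=1, y=0/1
])
assert np.isclose(p.sum(), 1.0)

def conditional_mutual_information(p_xyz):
    """Compute I(X2; Y | X1) for a joint array indexed as p[x1, x2, y]."""
    p_x1 = p_xyz.sum(axis=(1, 2), keepdims=True)   # p(x1)
    p_x1x2 = p_xyz.sum(axis=2, keepdims=True)       # p(x1, x2)
    p_x1y = p_xyz.sum(axis=1, keepdims=True)        # p(x1, y)
    # I(X2; Y | X1) = sum p(x1,x2,y) log[ p(x1) p(x1,x2,y) / (p(x1,x2) p(x1,y)) ]
    ratio = (p_x1 * p_xyz) / (p_x1x2 * p_x1y)
    mask = p_xyz > 0
    return float(np.sum(p_xyz[mask] * np.log2(ratio[mask])))

print(f"I(X2; Y | X1) = {conditional_mutual_information(p):.4f} bits")
```

A value of zero would mean the second agent's feature adds nothing to inference of the label once the first agent's feature is known, in which case there would be no benefit to collaboration; larger values indicate a larger potential cost of foregoing it.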

Original language: English
Article number: 485
Number of pages: 10
Journal: Entropy
Volume: 24
Issue number: 4
DOIs
Publication status: Published - 30 Mar 2022

Bibliographical note

Funding Information:
The authors have received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Programme (Grant Agreement No. 725731).

Publisher Copyright:
© 2022 by the authors. Licensee MDPI, Basel, Switzerland.

Keywords

  • Bayesian learning
  • information-theoretic analysis
  • vertical federated learning

ASJC Scopus subject areas

  • Information Systems
  • Mathematical Physics
  • Physics and Astronomy (miscellaneous)
  • Electrical and Electronic Engineering

