Conditional mutual information-based generalization bound for meta learning

Arezou Rezazadeh, Sharu Jose, Giuseppe Durisi, Osvaldo Simeone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Meta-learning optimizes an inductive bias—typically in the form of the hyperparameters of a base-learning algorithm—by observing data from a finite number of related tasks. This paper presents an information-theoretic bound on the generalization performance of any given meta-learner, which builds on the conditional mutual information (CMI) framework of Steinke and Zakynthinou (2020). In the proposed extension to meta-learning, the CMI bound involves a training meta-supersample obtained by first sampling 2N independent tasks from the task environment, and then drawing 2M independent training samples for each sampled task. The meta-training data fed to the meta-learner is modelled as being obtained by randomly selecting N tasks from the available 2N tasks and M training samples per task from the available 2M training samples per task. The resulting bound is explicit in two CMI terms, which measure the information that the meta-learner output and the base-learner output provide about which training data are selected, given the entire meta-supersample. Finally, we present a numerical example that illustrates the merits of the proposed bound in comparison to prior information-theoretic bounds for meta-learning.
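The two-stage sampling described in the abstract can be illustrated with a short sketch. The following Python snippet is not the authors' code: `sample_task` and `sample_datum` are hypothetical stand-ins for the task environment and the per-task data distribution, and the selection step is shown here as a uniformly random subset, whereas the paper specifies the exact distribution of the selection variables that enter the CMI terms.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 8  # number of meta-training tasks and training samples per task

def sample_task():
    # Hypothetical placeholder: draw a task (here, a scalar mean)
    # from the task environment.
    return rng.normal()

def sample_datum(task):
    # Hypothetical placeholder: draw one training sample from a task.
    return task + rng.normal()

# Step 1: build the meta-supersample by sampling 2N independent tasks
# and drawing 2M independent training samples for each sampled task.
tasks = [sample_task() for _ in range(2 * N)]
supersample = np.array([[sample_datum(t) for _ in range(2 * M)] for t in tasks])

# Step 2: the meta-training data is obtained by randomly selecting
# N of the 2N tasks and, per selected task, M of the 2M samples.
# The bound's two CMI terms measure what the meta-learner and
# base-learner outputs reveal about these selections, given the
# entire meta-supersample.
task_sel = rng.permutation(2 * N)[:N]
sample_sel = np.array([rng.permutation(2 * M)[:M] for _ in range(N)])

meta_training_data = np.stack(
    [supersample[t][sample_sel[i]] for i, t in enumerate(task_sel)]
)
print(meta_training_data.shape)  # (N, M)
```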
Original language: English
Title of host publication: 2021 IEEE International Symposium on Information Theory (ISIT)
Publisher: IEEE
Pages: 1176-1181
Number of pages: 6
DOIs
Publication status: Published - 1 Sept 2021

Publication series

Name: IEEE International Symposium on Information Theory proceedings
Publisher: IEEE
ISSN (Print): 2157-8095
ISSN (Electronic): 2157-8117

Keywords

  • Training
  • Deep learning
  • Adaptation models
  • Training data
  • Data models
  • Task analysis
  • Mutual information
