Sharper generalization bounds for learning with gradient-dominated objective functions

Yunwen Lei, Yiming Ying

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Stochastic optimization has become the workhorse behind many successful machine learning applications, which has motivated a great deal of theoretical analysis to understand its empirical behavior. By comparison, there is far less work studying its generalization behavior, especially in a non-convex learning setting. In this paper, we study the generalization behavior of stochastic optimization by leveraging algorithmic stability for learning with β-gradient-dominated objective functions. We develop generalization bounds of the order O(1/(nβ)) plus the convergence rate of the optimization algorithm, where n is the sample size. Our stability analysis significantly improves the existing non-convex analysis by removing the bounded gradient assumption and implying better generalization bounds. We achieve this improvement by exploiting the smoothness of loss functions instead of the Lipschitz condition used in Charles & Papailiopoulos (2018). We apply our general results to various stochastic optimization algorithms, which shows clearly how variance-reduction techniques improve not only training but also generalization. Furthermore, our discussion explains how interpolation helps generalization for highly expressive models.
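
As a rough illustration of the assumption named in the abstract, the sketch below numerically checks the β-gradient-dominance (Polyak–Łojasiewicz) condition F(w) − F(w*) ≤ ‖∇F(w)‖² / (2β) on a simple least-squares objective, for which the condition holds with β equal to the smallest eigenvalue of the Hessian. This is a minimal, hypothetical example for intuition only; the objective, data, and NumPy implementation are assumptions and do not come from the paper.

```python
# Minimal sketch (illustrative, not from the paper): check the beta-gradient-dominance
# (Polyak-Lojasiewicz) condition  F(w) - F(w*) <= ||grad F(w)||^2 / (2*beta)
# for a least-squares objective F(w) = (1/2n) ||Xw - y||^2, where it holds with
# beta = smallest eigenvalue of the Hessian H = X^T X / n.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

H = X.T @ X / n                          # Hessian of the quadratic objective
w_star = np.linalg.solve(H, X.T @ y / n) # empirical minimizer
beta = np.linalg.eigvalsh(H)[0]          # gradient-dominance (PL) parameter

def F(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def grad_F(w):
    return X.T @ (X @ w - y) / n

# The inequality should hold (up to floating-point error) at arbitrary points.
for _ in range(5):
    w = rng.normal(size=d)
    lhs = F(w) - F(w_star)
    rhs = np.linalg.norm(grad_F(w)) ** 2 / (2 * beta)
    assert lhs <= rhs + 1e-10
```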
Original language: English
Title of host publication: International Conference on Learning Representations
Subtitle of host publication: ICLR 2021
Publisher: OpenReview.net
Pages: 1-23
Number of pages: 23
Publication status: Published - 16 Jan 2021
Event: The Ninth International Conference on Learning Representations - Virtual
Duration: 3 May 2021 – 7 May 2021

Conference

Conference: The Ninth International Conference on Learning Representations
Abbreviated title: ICLR 2021
Period: 3/05/21 – 7/05/21

Keywords

  • generalization bounds
  • non-convex learning
