Robust Data-Driven Accelerated Mirror Descent

Hong Ye Tan, Subhadip Mukherjee, Junqi Tang, Andreas Hauptmann, Carola-Bibiane Schönlieb

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Learning-to-optimize is an emerging framework that leverages training data to speed up the solution of certain optimization problems. One such approach is based on the classical mirror descent algorithm, where the mirror map is modelled using input-convex neural networks. In this work, we extend this functional parameterization approach by introducing momentum into the iterations, based on the classical accelerated mirror descent. Our approach combines short-time accelerated convergence with stable long-time behavior. We empirically demonstrate additional robustness with respect to multiple parameters on denoising and deconvolution experiments.
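The abstract describes adding momentum to mirror descent in the style of classical accelerated mirror descent. A minimal sketch of such an accelerated scheme is given below, assuming the simplest mirror map ψ(x) = ½‖x‖², whose gradient and inverse gradient are both the identity; in the learned setting of the paper these maps would instead be parameterized by input-convex neural networks. The function and step parameters here are hypothetical illustrations, not the paper's experimental setup.

```python
import numpy as np

def amd(grad_f, x0, steps=1000, s=0.5, r=3.0):
    """Accelerated mirror descent with identity mirror maps.

    With psi(x) = 0.5*||x||^2, the mirror step reduces to a plain
    dual-variable update; the coupling weight lam = r/(r+k) supplies
    the momentum. s is the step size (s <= 1/L for L-smooth f).
    """
    x = z = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        lam = r / (r + k)
        y = lam * z + (1.0 - lam) * x    # momentum/averaging (coupling) step
        z = z - (k * s / r) * grad_f(y)  # mirror (dual) update with growing weight
        x = y - s * grad_f(y)            # primal gradient step for stability
    return x

# Toy problem: f(x) = 0.5*||x - b||^2, minimized at x = b.
b = np.array([1.0, -2.0, 3.0])
grad_f = lambda x: x - b
x_star = amd(grad_f, np.zeros(3))
```

With the identity mirror map this reduces to a Nesterov-type three-sequence scheme; replacing the identity in the dual update with a learned inverse mirror map recovers the functional parameterization discussed in the abstract.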
Original language: English
Title of host publication: ICASSP 2023 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: IEEE
ISBN (Electronic): 9781728163277
ISBN (Print): 9781728163284
DOIs
Publication status: E-pub ahead of print - 5 May 2023
Event: 2023 IEEE International Conference on Acoustics, Speech and Signal Processing - Rhodes Island, Greece
Duration: 4 Jun 2023 – 10 Jun 2023

Conference

Conference: 2023 IEEE International Conference on Acoustics, Speech and Signal Processing
Abbreviated title: ICASSP 2023
Country/Territory: Greece
Period: 4/06/23 – 10/06/23

