Abstract
Learning-to-optimize is an emerging framework that leverages training data to speed up the solution of certain optimization problems. One such approach builds on the classical mirror descent algorithm, with the mirror map modelled by an input-convex neural network. In this work, we extend this functional parameterization approach by introducing momentum into the iterations, based on classical accelerated mirror descent. Our approach combines short-time accelerated convergence with stable long-time behavior. We empirically demonstrate additional robustness with respect to multiple parameters on denoising and deconvolution experiments.
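To make the underlying iteration concrete, the following is a minimal pure-Python sketch of *classical* accelerated mirror descent (Krichene et al.-style coupling of a momentum sequence with a stabilizing mirror step) using the fixed entropy mirror map on the probability simplex — not the paper's learned, neural-network mirror map. The function names, stepsizes, and the quadratic test objective are illustrative assumptions, not from the paper.

```python
import math

def md_step(x, g, s):
    """One entropy-mirror (exponentiated-gradient) step on the simplex:
    x <- x * exp(-s * g), renormalized; shifted by max for numerical safety."""
    w = [-s * gi for gi in g]
    m = max(w)
    y = [xi * math.exp(wi - m) for xi, wi in zip(x, w)]
    t = sum(y)
    return [yi / t for yi in y]

def accelerated_md(grad_f, x0, eta=0.2, r=3, iters=1000):
    """Accelerated mirror descent: couple an aggressively-stepped momentum
    sequence z with a conservatively-stepped sequence xt via weights
    lam_k = r / (r + k). Simplified sketch; stepsize choices are assumptions."""
    z, xt = list(x0), list(x0)
    for k in range(iters):
        lam = r / (r + k)
        x = [lam * zi + (1 - lam) * xti for zi, xti in zip(z, xt)]
        g = grad_f(x)
        z = md_step(z, g, (k + 1) * eta / r)  # growing stepsize: momentum/averaging
        xt = md_step(x, g, eta)               # fixed stepsize: stable descent
    return xt
```

For example, minimizing f(x) = ½‖x − b‖² over the simplex with b = [0.5, 0.3, 0.2] (gradient x − b) recovers b; the short-time acceleration comes from the growing stepsize in the z-sequence, while the fixed-stepsize xt-sequence provides the stable long-time behavior the abstract refers to.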
Original language | English |
---|---|
Title of host publication | ICASSP 2023 - IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) |
Publisher | IEEE |
ISBN (Electronic) | 9781728163277 |
ISBN (Print) | 9781728163284 |
DOIs | |
Publication status | E-pub ahead of print - 5 May 2023 |
Event | 2023 IEEE International Conference on Acoustics, Speech and Signal Processing - Rhodes Island, Greece. Duration: 4 Jun 2023 → 10 Jun 2023 |
Conference
Conference | 2023 IEEE International Conference on Acoustics, Speech and Signal Processing |
---|---|
Abbreviated title | ICASSP 2023 |
Country/Territory | Greece |
Period | 4/06/23 → 10/06/23 |