Conferences
The 35th Machine Learning Lab Doctoral Series Forum: Explicit Superlinear Convergence Rates of Quasi-Newton Methods

Abstract: Quasi-Newton methods are widely regarded as among the most efficient numerical schemes for smooth unconstrained optimization, owing to their attractive superlinear convergence. However, the rates established in earlier work were only asymptotic, until Rodomanov and Nesterov (2021a,b,c) gave explicit superlinear rates for greedy and standard quasi-Newton methods. In this talk, we will present two of our works that follow this line of research. The first improves the theory of greedy quasi-Newton methods by establishing condition-number-free superlinear convergence rates. The second gives explicit superlinear rates for a modified standard SR1 method.
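For context, the classical SR1 update that the second work modifies is commonly written, with $s_k = x_{k+1} - x_k$ and $y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$ (the symbols here are the usual secant notation, chosen for illustration rather than taken from the abstract), as

\[
B_{k+1} = B_k + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\top}}{(y_k - B_k s_k)^{\top} s_k},
\]

and "superlinear convergence" refers to the property $\lim_{k \to \infty} \|x_{k+1} - x^*\| / \|x_k - x^*\| = 0$, for which the cited works supply explicit, non-asymptotic bounds.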