
Never Saddle Down for Reparameterized Steepest Descent as Mirror Flow

Abstract

How does the choice of optimization algorithm shape a model's ability to learn features? To address this question for steepest descent methods (including sign descent, which is closely related to Adam), we introduce steepest mirror flows as a unifying theoretical framework. This framework reveals how optimization geometry governs learning dynamics, implicit bias, and sparsity, and it provides two explanations for why Adam and AdamW often outperform SGD in fine-tuning. Focusing on diagonal linear networks and deep diagonal linear reparameterizations (a simplified proxy for attention), we show that steeper descent facilitates both saddle-point escape and feature learning. In contrast, gradient descent requires unrealistically large learning rates to escape saddles, a regime uncommon in fine-tuning. Empirically, we confirm that saddle-point escape is a central challenge in fine-tuning. Furthermore, we demonstrate that decoupled weight decay, as in AdamW, stabilizes feature learning by enforcing novel balance equations. Together, these results highlight two mechanisms by which steepest descent can aid modern optimization.
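The contrast between gradient descent and sign descent near a saddle can be made concrete on a toy diagonal linear network f(x) = <u * v, x>. The sketch below is illustrative only and is not the paper's experimental setup; the data, initialization scale, learning rate, and step budget are assumptions, chosen so that both methods use the same learning rate.

import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 100
X = rng.standard_normal((n, d))
w_star = np.zeros(d)
w_star[:3] = 1.0                          # sparse ground-truth regressor
y = X @ w_star

def loss_and_grads(u, v):
    """Squared loss of the diagonal linear network f(x) = <u * v, x>."""
    r = X @ (u * v) - y                   # residuals
    g = X.T @ r / n                       # gradient w.r.t. the product w = u * v
    return 0.5 * np.mean(r ** 2), g * v, g * u   # chain rule: dL/du, dL/dv

def train(direction, lr=0.01, steps=300, init=1e-4):
    """Steepest-descent-style training; `direction` maps a gradient to a step direction."""
    u = np.full(d, init)                  # near-zero init, i.e. close to the saddle at 0
    v = np.full(d, init)
    for _ in range(steps):
        _, gu, gv = loss_and_grads(u, v)
        u, v = u - lr * direction(gu), v - lr * direction(gv)
    return loss_and_grads(u, v)[0]

print("gradient descent loss:", train(lambda g: g))   # barely leaves the saddle at this lr
print("sign descent loss:    ", train(np.sign))       # escapes and fits the signal

With a balanced near-zero initialization, the gradient with respect to u scales with v (and vice versa), so gradient updates shrink toward zero near the origin, whereas sign descent takes constant-size steps per coordinate and leaves the saddle within a modest number of iterations at the same learning rate.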

Conference Paper

International Conference on Learning Representations (ICLR)

Publication Date

2026-01-26

Last Modified

2026-01-26