Gazagnadou, Nidham (2018) Optimal minibatch size for stochastic variance-reduced methods PFE - Project Graduation, ENSTA.
Full text: PDF (487 KB)
Abstract
In this report, we find simple expressions for the optimal minibatch size of the celebrated stochastic average gradient (SAGA) method. Our approach consists in finding the minibatch size that minimizes the total complexity of the minibatch SAGA algorithm, defined as the iteration complexity times the number of gradient computations per iteration. These complexity results are taken from recent work by Gower, Richtárik and Bach (2018), who introduced the concept of the expected smoothness constant. The latter is a characteristic of the optimization problem and controls the iteration complexity of several stochastic variance-reduced gradient methods. In order to obtain a closed-form optimal minibatch size, we first derived upper bounds on the expected smoothness constant in the case of SAGA with sampling without replacement, using direct computations and matrix concentration inequalities. Then, we plugged these upper bounds into the previous complexity results, yielding a pessimistic total complexity, and computed its minimizer: our estimate of the optimal minibatch size.
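The recipe described in the abstract can be sketched numerically. The code below is a minimal illustration, not the thesis's exact formulas: the bound on the expected smoothness constant is an assumed interpolation between the maximal individual smoothness constant `L_max` (at b = 1) and the full-batch constant `L` (at b = n), and the iteration-complexity shape (a max of a smoothness term and a sampling term, times a log factor) is a generic variance-reduced rate. All constants (`n`, `L`, `L_max`, `mu`) are placeholder values.

```python
def expected_smoothness_bound(b, n, L, L_max):
    """Assumed upper bound on the expected smoothness constant for
    b-sample minibatching without replacement: equals L_max at b = 1
    and L at b = n (an illustrative interpolation, not the thesis's)."""
    return (n * (b - 1)) / (b * (n - 1)) * L + (n - b) / (b * (n - 1)) * L_max

def total_complexity(b, n, L, L_max, mu):
    """Total complexity = iteration complexity x gradients per iteration.
    The iteration count uses a generic variance-reduced rate shape:
    max(smoothness term / mu, n / b)."""
    iterations = max(expected_smoothness_bound(b, n, L, L_max) / mu, n / b)
    return b * iterations

def optimal_minibatch(n, L, L_max, mu):
    """Grid-search the integer minibatch size minimizing total complexity."""
    return min(range(1, n + 1), key=lambda b: total_complexity(b, n, L, L_max, mu))

# Placeholder problem constants, for illustration only.
n, L, L_max, mu = 1000, 1.0, 50.0, 1e-3
b_star = optimal_minibatch(n, L, L_max, mu)
print(b_star, total_complexity(b_star, n, L, L_max, mu))
```

In this sketch the thesis's analytical step (minimizing a closed-form upper bound) is replaced by a brute-force search, which makes the trade-off visible: small b means cheap iterations but many of them, large b the reverse.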
| Item Type: | Thesis (PFE - Project Graduation) |
|---|---|
| Uncontrolled Keywords: | optimal minibatch |
| Subjects: | Mathematics and Applications |
| ID Code: | 7183 |
| Deposited By: | Nidham Gazagnadou |
| Deposited On: | 27 Nov 2018 10:57 |
| Last Modified: | 27 Nov 2018 10:57 |