Raghu Pasupathy (Purdue University)

Event Date: 

Wednesday, February 22, 2017 - 3:30pm to 5:00pm

Event Date Details: 

Refreshments served at 3:15pm

Event Location: 

Sobel Seminar Room

Event Series: 

Department Seminar Series

Abstract: We consider the question of unconstrained optimization in certain settings, e.g., stochastic optimization, where the objective function f is unknown but can be estimated using an inexact oracle such as quasi-Monte Carlo (QMC) or numerical quadrature. The inexact oracle is assumed to yield function estimates whose error decays with an increase in oracle effort. For solving such optimization problems, we present the adaptive gradient method (AGM) in two flavors called Constant Step AGM and Backtracking Line Search AGM. AGM's salient feature is the adaptive manner in which it constructs gradient estimates — since gradient information is unavailable, AGM estimates the gradient adaptively, by exerting (only) enough oracle effort to keep the error in the gradient estimate in "lock-step" with the true gradient. We show that AGM's iterates converge to a stationary point of f under minimal conditions. We also prove two sets of results on AGM's work complexity. (i) When f is twice differentiable with Lipschitz first derivative, AGM's work complexity is shown to be arbitrarily close to O(ε^(-2 - 1/μ(α))), where μ(α) is the error decay rate of the gradient estimate expressed as a function of the error decay rate α of the objective function estimate. (ii) When f is twice differentiable and strongly convex with Lipschitz first derivative, AGM's work complexity is shown to be arbitrarily close to O(ε^(-1/μ(α))). The corresponding rates in (i) and (ii) when an exact oracle for f and its gradient is available are known to be O(ε^(-2)) and O(-log ε) respectively. We illustrate the calculation of α and μ(α) for common choices, e.g., QMC with finite difference gradients; we also illustrate AGM through a numerical example.
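The "lock-step" idea in the abstract can be illustrated with a minimal sketch. This is not the speaker's implementation; the oracle (a one-dimensional quadratic with a deterministic error term decaying like n^(-α) in oracle effort n), the crude error bound, and the lock-step factor theta are all illustrative assumptions. For a central finite difference on such an oracle, balancing truncation error O(h^2) against oracle error O(n^(-α)/h) suggests h ~ n^(-α/3) and a gradient-error decay of roughly n^(-2α/3), one concrete example of a μ(α) calculation.

```python
import math

ALPHA = 1.0  # assumed error decay rate of the function-value oracle

def f_oracle(x, n):
    """Hypothetical inexact oracle for f(x) = x^2.

    The sin term is an illustrative stand-in for quadrature/QMC error;
    its magnitude decays like n^(-ALPHA) as oracle effort n grows.
    """
    return x * x + math.sin(50.0 * x) / n ** ALPHA

def grad_estimate(x, n):
    """Central finite difference at oracle effort n.

    h ~ n^(-ALPHA/3) balances truncation error O(h^2) against
    oracle error O(n^(-ALPHA)/h); err_bound is a crude bound with
    constants dropped (an assumption of this sketch).
    """
    h = n ** (-ALPHA / 3.0)
    g = (f_oracle(x + h, n) - f_oracle(x - h, n)) / (2.0 * h)
    err_bound = 2.0 * n ** (-ALPHA) / h + h * h
    return g, err_bound

def agm_constant_step(x0, step=0.2, theta=0.5, tol=1e-3, max_iter=200):
    """Constant-step gradient descent with lock-step oracle effort."""
    x, n = x0, 4
    for _ in range(max_iter):
        g, err = grad_estimate(x, n)
        # Lock-step rule: raise oracle effort only until the gradient-error
        # bound is a fraction theta of the estimated gradient magnitude.
        while err > theta * abs(g) and n < 10 ** 8:
            n *= 2
            g, err = grad_estimate(x, n)
        if abs(g) < tol:
            break
        x -= step * g
    return x

x_star = agm_constant_step(1.0)
print(x_star)  # lands near the minimizer x = 0
```

Because the error bound shrinks relative to the gradient estimate, the oracle effort n grows automatically as the iterates approach a stationary point, which is the adaptivity the abstract describes.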