zeroSR1 toolbox

The zeroSR1 toolbox implements the algorithm from 'A quasi-Newton proximal splitting method' by Stephen Becker and Jalal Fadili, which appeared at NIPS 2012. The paper is available as arXiv:1206.1156.

Briefly, the algorithm follows the standard proximal-gradient method, but allows the prox to be taken with respect to a non-identity metric (a "scaled" prox). This makes it possible to use a limited-memory SR1 quasi-Newton metric (similar in spirit to L-BFGS), which has diagonal +/- rank-1 structure.
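To fix ideas, here is a minimal sketch of the unscaled special case on a LASSO instance: with the trivial metric (1/t)*I, the scaled prox reduces to the classic proximal-gradient (ISTA) iteration. zeroSR1 replaces this metric with a diagonal +/- rank-1 matrix and the correspondingly scaled prox. The random problem data here are purely illustrative:

```matlab
% Plain proximal gradient (ISTA) for min_x 0.5*||A*x-b||^2 + lambda*||x||_1.
% This is the B = (1/t)*I special case of the scaled iteration.
rng(0);
A = randn(40, 100); b = randn(40, 1); lambda = 0.1;
t = 1 / norm(A)^2;                            % step size = 1/Lipschitz constant
x = zeros(100, 1);
for k = 1:200
    z = x - t * (A' * (A*x - b));             % gradient step on f
    x = sign(z) .* max(abs(z) - t*lambda, 0); % prox of t*lambda*||.||_1
end
```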

The algorithm solves problems of the form min_x f(x) + h(x), where f is differentiable (more precisely, has a Lipschitz-continuous gradient) and h is one of the following (see the paper):

Available "h" Cost for input of size "n"
l1 norm O( n log n)
non-negativity constraints O( n log n)
l1 and non-negativity O( n log n)
box constraints O( n log n )
l_infinity norm constraint O( n log n )
hinge loss O( n log n )
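To illustrate where the O(n log n) costs in the table above come from, here is a sketch (not the toolbox's own code) of the l1-norm prox under a metric B = diag(d) + u*u'. Following the reduction in the paper, the optimality conditions collapse to finding the root of a monotone scalar function of alpha = u'*(x - y); the toolbox does this with a sort-based O(n log n) search, while this sketch uses a simple bisection for clarity:

```matlab
function x = prox_l1_rank1(y, d, u, lambda)
% Sketch: argmin_x lambda*||x||_1 + 0.5*(x-y)'*(diag(d)+u*u')*(x-y),
% with d > 0. The solution has the form x = proxD(y - alpha*(u./d)),
% where the scalar alpha satisfies alpha = u'*(x(alpha) - y).
    proxD = @(z) sign(z) .* max(abs(z) - lambda./d, 0); % diagonal-metric l1 prox
    xa    = @(a) proxD(y - a*(u./d));                   % candidate x(alpha)
    g     = @(a) a - u'*(xa(a) - y);                    % strictly increasing in alpha
    lo = -1; while g(lo) > 0, lo = 2*lo; end            % bracket the root from the left
    hi =  1; while g(hi) < 0, hi = 2*hi; end            % bracket the root from the right
    for k = 1:60                                        % bisection
        mid = (lo + hi)/2;
        if g(mid) < 0, lo = mid; else, hi = mid; end
    end
    x = xa((lo + hi)/2);
end
```

For example, x = prox_l1_rank1(randn(5,1), ones(5,1), 0.1*randn(5,1), 0.2). The paper also covers the diagonal minus rank-1 case, which requires a bit more care.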

The algorithm compares favorably with other methods, including L-BFGS-B.

This toolbox is currently implemented in the following languages:

  • Matlab
  • Octave

Further releases may target these languages:

  • Python
  • R
  • C++

Installation

For Matlab, no installation is necessary. At the start of each Matlab session, run the setup_zeroSR1.m file; it will add the correct paths.

Run tests/test_solver_simple.m to see how to solve a typical problem.
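A typical first session might look like the following (run from the repository root; if setup_zeroSR1 does not put the tests folder on the path, cd into tests first):

```matlab
setup_zeroSR1        % adds the toolbox folders to the path (once per session)
test_solver_simple   % walks through solving a typical problem
```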

Structure

In each folder, see the Contents.m file for more information.

Algorithms

This includes the zeroSR1 algorithm as well as implementations of FISTA and other proximal-gradient methods.
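For reference, the classic FISTA iteration looks roughly like the following (a minimal sketch on the same kind of LASSO instance as above, not the toolbox's own implementation):

```matlab
% FISTA (accelerated proximal gradient) for the LASSO.
rng(0);
A = randn(40, 100); b = randn(40, 1); lambda = 0.1;
s = 1 / norm(A)^2;                              % step size = 1/Lipschitz constant
x = zeros(100, 1); y = x; t = 1;
for k = 1:200
    x_old = x;
    z = y - s * (A' * (A*y - b));               % gradient step at the extrapolated point
    x = sign(z) .* max(abs(z) - s*lambda, 0);   % soft-thresholding (l1 prox)
    t_new = (1 + sqrt(1 + 4*t^2)) / 2;          % momentum parameter update
    y = x + ((t - 1) / t_new) * (x - x_old);    % extrapolation
    t = t_new;
end
```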

Proxes

The scaled diagonal + rank-1 prox operators for the various nonsmooth functions (the "h" terms listed above).

SmoothFunctions

These are pre-made wrappers for the various smooth "f" functions. The files here with the _splitting suffix are intended for use with any method that requires forming the augmented variable x_aug = (x_pos, x_neg). For example, this approach is used when applying L-BFGS-B (which handles only box constraints, such as x_pos >= 0, x_neg <= 0) to the LASSO problem.
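A minimal sketch of that splitting, with hypothetical names (this is not the toolbox's API): writing x = x_pos + x_neg with x_pos >= 0 and x_neg <= 0 gives ||x||_1 = sum(x_pos) - sum(x_neg), so the LASSO objective becomes a smooth function of x_aug subject only to box constraints:

```matlab
function [val, grad] = lasso_split(x_aug, A, b, lambda)
% Hypothetical example: smooth objective and gradient for the LASSO
% 0.5*||A*x-b||^2 + lambda*||x||_1 after the splitting x = x_pos + x_neg;
% the box constraints x_pos >= 0, x_neg <= 0 are left to the solver.
    n = numel(x_aug) / 2;
    x_pos = x_aug(1:n);
    x_neg = x_aug(n+1:end);
    r = A * (x_pos + x_neg) - b;                                 % residual A*x - b
    val = 0.5 * (r' * r) + lambda * (sum(x_pos) - sum(x_neg));   % ||x||_1 via the split
    g = A' * r;
    grad = [g + lambda; g - lambda];                             % w.r.t. x_pos and x_neg
end
```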

Utilities

Helper files

Tests

Scripts that verify the algorithm and proxes are working correctly. These use CVX for verification; if CVX is not installed on your system, they fall back on precomputed solutions stored in a subdirectory.

Experiments

These recreate comparisons of several algorithms, or do something more involved than the scripts in the tests folder.

Authors

The original authors are Stephen Becker and Jalal Fadili. Further contributions are welcome.
