Numerical Optimization
Key introductory reading material is the Palgrave entry on numerical optimization methods by Karl Schmedders. Next read chapter 4 on optimization from Ken Judd's Numerical Methods in Economics. The reference textbook for optimization is the 2nd edition of Nocedal and Wright's Numerical Optimization (the 1st edition is now available as a PDF).
Locally and globally convergent Newton's method.
Line search eliminates cycling. What to do when line search fails? For a fixed sigma there might be no step size t that satisfies the sufficient-decrease condition because of numerical precision, and changing sigma will not fix the problem. A possible solution is to re-scale. Always plot the function (and perhaps t times the gradient of the function) and visually check for descent.
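A minimal sketch of a backtracking (Armijo) line search, just to make the roles of sigma and t concrete; the quadratic test function and parameter values are illustrative, not from the lecture.

```python
import numpy as np

def backtracking_line_search(f, grad, x, p, sigma=1e-4, beta=0.5, t=1.0, max_iter=50):
    """Shrink t until the sufficient-decrease (Armijo) condition holds.

    f: objective, grad: gradient of f at x, x: current iterate, p: descent direction.
    """
    fx = f(x)
    slope = grad @ p           # directional derivative; negative for a descent direction
    for _ in range(max_iter):
        if f(x + t * p) <= fx + sigma * t * slope:
            return t
        t *= beta              # shrink the step and try again
    return None                # failure: re-scale and plot f(x + t*p) against t

# One steepest-descent step on a simple quadratic
f = lambda x: 0.5 * x @ x
x0 = np.array([3.0, -4.0])
g = x0                         # gradient of f at x0
print(backtracking_line_search(f, g, x0, -g))   # -> 1.0 (full step accepted)
```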
Linearize the (potentially non-linear) system, then "solve" the resulting linear system of equations. Direct methods (i.e., factorization of the Jacobian) versus indirect methods (i.e., Krylov subspace methods). Use a multi-dimensional version of line search in order to get a globally convergent method.
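A minimal sketch of a damped Newton iteration for a small nonlinear system, using a direct solve of the linearization and a crude residual-norm line search; the test system is illustrative. For large sparse problems the `np.linalg.solve` call would be replaced by a Krylov method such as `scipy.sparse.linalg.gmres`.

```python
import numpy as np

def F(x):
    # Illustrative 2-d nonlinear system with a root at (1, 1)
    return np.array([x[0]**2 + x[1]**2 - 2.0,
                     x[0] - x[1]])

def J(x):
    # Jacobian of F
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [1.0, -1.0]])

def damped_newton(F, J, x, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        step = np.linalg.solve(J(x), -Fx)   # direct solve (factorization) of the linearization
        t = 1.0
        # crude multi-dimensional line search: damp the step until the residual norm decreases
        while np.linalg.norm(F(x + t * step)) >= np.linalg.norm(Fx) and t > 1e-8:
            t *= 0.5
        x = x + t * step
    return x

print(damped_newton(F, J, np.array([2.0, 0.5])))   # -> approximately [1. 1.]
```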
Problems: if the gradient/Jacobian is not invertible you can't solve the linear system implied by the linearization. If the condition number of the system of equations is large, this is bad.
You don't actually need to solve the linear system implied by the linearization exactly: you can "perturb" the system and solve the perturbed system of equations. Pre-conditioning is different from perturbation: perturbation is about fixing rank deficiency, while pre-conditioning is about speeding up convergence of an iterative method.
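A small sketch contrasting the two ideas, with made-up data; the perturbation parameter tau and the diagonal (Jacobi) preconditioner are illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n))
A[:, -1] = A[:, 0]              # duplicate a column: A is now rank deficient
b = rng.standard_normal(n)

# Perturbation: fix the rank deficiency by solving (A + tau*I) x = b instead of A x = b
tau = 1e-6
x_perturbed = np.linalg.solve(A + tau * np.eye(n), b)

# Pre-conditioning: speed up an iterative (Krylov) solver on a solvable system
B = A + 10.0 * np.eye(n)        # a non-singular matrix for illustration
M = LinearOperator((n, n), matvec=lambda v: v / np.diag(B))   # Jacobi preconditioner
x_krylov, info = gmres(B, b, M=M)
```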
??
For convex functions local minimizers are global minimizers; in general this is not the case! First- and second-order conditions: at a stationary point, a positive definite Hessian (i.e., strictly positive eigenvalues) guarantees a local minimizer; a negative definite Hessian (i.e., strictly negative eigenvalues) guarantees a local maximizer.
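A quick numerical check of the second-order conditions via the eigenvalues of the Hessian; the function and the (approximate, numerically found) stationary point are illustrative.

```python
import numpy as np

def hessian(x):
    # Analytic Hessian of the illustrative function f(x, y) = x**4 + x*y + (1 + y)**2
    return np.array([[12.0 * x[0]**2, 1.0],
                     [1.0, 2.0]])

x_star = np.array([0.6959, -1.3479])        # approximate stationary point of f
eigenvalues = np.linalg.eigvalsh(hessian(x_star))

if np.all(eigenvalues > 0):
    print("positive definite Hessian: local minimizer")
elif np.all(eigenvalues < 0):
    print("negative definite Hessian: local maximizer")
else:
    print("indefinite (or singular) Hessian: saddle point or inconclusive")
```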
Most non-linear solvers are some combination of a local method with fast convergence and a good globalization strategy to ensure convergence.
Compute a quadratic approximation to the objective and only "trust" that approximation within some radius of the current iterate. Solve that quadratic subproblem within the region. Ingredients of a trust-region method: how do you define the size of the trust region (and update it adaptively); how do you approximate the Hessian used in the quadratic approximation.
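These ingredients (radius update, Hessian approximation, subproblem solver) are packaged inside library trust-region codes; a minimal sketch using SciPy's Newton-CG trust-region method on the Rosenbrock test function.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, rosen_hess

x0 = np.array([-1.2, 1.0])

# Quadratic model built from the exact Hessian, minimized (approximately) inside
# a trust radius that the solver expands or shrinks adaptively.
result = minimize(rosen, x0, method="trust-ncg", jac=rosen_der, hess=rosen_hess)
print(result.x)   # -> approximately [1. 1.]
```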
Reduce the non-linear optimization problem to a sequence of quadratic optimization problems. Why? Quadratic problems (either convex or non-convex) can be solved using linear algebra (the KKT conditions are linear)!
Never transform variables to enforce bound constraints! Use an appropriate solver that is designed to handle bound constraints!
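For example, with SciPy you can hand the bounds straight to a bound-constrained solver such as L-BFGS-B; the objective and bounds below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0)**2 + (x[1] + 1.0)**2   # unconstrained minimum at (2, -1)

# Pass the bounds to a solver designed for them instead of transforming the
# variables (e.g., via exp or logistic maps).
bounds = [(0.0, 1.0), (0.0, 1.0)]
result = minimize(f, x0=np.array([0.5, 0.5]), method="L-BFGS-B", bounds=bounds)
print(result.x)   # -> approximately [1. 0.], the minimizer on the box
```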
Constrained minimization boils down to finding a point from which there are no feasible descent directions. Need some way to characterize the set of feasible search directions: constraint qualifications?
Linear programming doesn't yield curvature information...which means convergence of non-linear solvers using sequential linear programming will be slow!
Can get quadratic convergence when solving constrained optimization problems by using sequential quadratic programming.
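A minimal sketch using SciPy's SLSQP routine (a sequential least-squares quadratic programming method, i.e., an SQP variant); the objective and constraint are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

objective = lambda x: x[0]**2 + x[1]**2
# SciPy's inequality convention is g(x) >= 0; here the constraint is x0 + x1 >= 1
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

result = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
                  constraints=constraints)
print(result.x)   # -> approximately [0.5 0.5]
```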
Solve a linear program and then use this information to predict which constraints are actually binding. Then solve a quadratic program with equality constraints only! Much simpler (and much more efficient!) than solving a quadratic programming problem with inequality constraints.
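To see why the equality-constrained quadratic program is "just" linear algebra: its KKT conditions form a single linear system. A sketch with illustrative data:

```python
import numpy as np

# Equality-constrained QP: minimize 0.5*x'Gx + c'x  subject to  A x = b
G = np.array([[2.0, 0.0], [0.0, 2.0]])    # positive definite Hessian
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])                # single equality constraint: x0 + x1 = 1
b = np.array([1.0])

# KKT conditions: G x + A' lambda = -c and A x = b, i.e., one linear system
n, m = G.shape[0], A.shape[0]
KKT = np.block([[G, A.T],
                [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
solution = np.linalg.solve(KKT, rhs)
x, lam = solution[:n], solution[n:]
print(x)   # -> [-0.25  1.25], the minimizer on the constraint set
```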
Globalization strategy (i.e., to get global convergence): need to use either line search or trust-region methods.
- If the new iterate decreases the objective and is feasible: ACCEPT!
- If the new iterate increases the objective and is infeasible: REJECT!
- If the new iterate decreases the objective but violates constraint(s): ??
- If the new iterate increases the objective but satisfies the constraints: ??
Filter out points that are clearly unacceptable, and then update the filter. Global convergence can be proved for filter methods.
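The acceptance test in a few lines of (simplified) code: a trial point is rejected only if some pair already stored in the filter is better in both the objective and the constraint violation. The margin and the example pairs are illustrative; real filter methods use slightly more elaborate envelopes.

```python
def acceptable_to_filter(f_new, h_new, filter_pairs, margin=1e-5):
    """Accept the trial point (f_new, h_new) unless some stored pair (f, h)
    dominates it, i.e., is better by at least `margin` in BOTH the objective f
    and the constraint violation h."""
    return not any(f_old <= f_new - margin and h_old <= h_new - margin
                   for (f_old, h_old) in filter_pairs)

filter_pairs = [(10.0, 0.5), (12.0, 0.1)]
print(acceptable_to_filter(9.0, 0.3, filter_pairs))    # True: no stored pair dominates it
print(acceptable_to_filter(13.0, 0.6, filter_pairs))   # False: (10.0, 0.5) dominates it
```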
Not covered?
??
For non-convex problems (either the objective or the constraints) use an active set method. For convex problems (or for really large problems) use interior-point methods.
Got a bit lost at this point!
Are the variables properly scaled?
- If variable/constraint values ranging from 1e-8 to 1e8 matter to you, they don't to the computer: your problem is horribly scaled!
- Is your gradient/Jacobian ill-conditioned? Does it have very big and very small entries? If the dynamic range is large, this is bad.
- Does the Hessian of the Lagrangian have both very big and very small entries?
Column or row scaling is just a linear change of variables. But! Remember that the algorithm is now solving the scaled problem. Always check that the solution also solves the original problem. Always plot your solution! Sometimes even good solvers will tell you a solution has been found when it hasn't, and this will be obvious if you plot it!
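A small sketch of the scaling point, with made-up numbers: rescale the variables by a diagonal change of variables, solve the scaled problem, then always map the solution back and check it against the original problem.

```python
import numpy as np
from scipy.optimize import minimize

# Badly scaled objective: x[0] is of order 1e6 while x[1] is of order 1e-6,
# so the gradient entries differ by many orders of magnitude.
f = lambda x: (x[0] / 1e6 - 1.0)**2 + (1e6 * x[1] - 2.0)**2

# Column scaling is just a linear change of variables: x = D @ y, with y of order 1
D = np.diag([1e6, 1e-6])
f_scaled = lambda y: f(D @ y)

y_opt = minimize(f_scaled, x0=np.zeros(2), method="BFGS").x
x_opt = D @ y_opt               # map the solution back to the original variables

# Always verify (and plot) the solution of the ORIGINAL problem, not just the scaled one
print(x_opt, f(x_opt))          # expect roughly [1e6, 2e-6] with f(x_opt) near 0
```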
Coopr (A COmmon Optimization Python Repository) is a collection of Python software packages that supports a diverse set of optimization capabilities for formulating and analyzing optimization models. A key driver for Coopr development is Pyomo, an open source tool for modeling optimization applications in Python. Pyomo can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. Thus, Pyomo provides a capability that is commonly associated with algebraic modeling languages like AMPL and GAMS. Pyomo also makes use of the open source AMPL Solver Library (ASL), which means that if your solver is supported by AMPL you can get automatic differentiation for free when using Coopr and Pyomo. If your university (or research institute) has access to SpringerLink you can probably download a free copy of [Pyomo - Optimization Modeling in Python](http://www.springer.com/mathematics/book/978-1-4614-3225-8).
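A minimal Pyomo sketch: a concrete model with one constraint, solved with a locally installed NLP solver. The choice of Ipopt is an assumption; any AMPL/ASL-compatible solver on your PATH would work, with derivatives supplied automatically through the ASL interface.

```python
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, minimize, value)

# Minimize (x - 2)^2 + (y - 3)^2  subject to  x + y <= 4, x >= 0, y >= 0
model = ConcreteModel()
model.x = Var(domain=NonNegativeReals)
model.y = Var(domain=NonNegativeReals)
model.obj = Objective(expr=(model.x - 2)**2 + (model.y - 3)**2, sense=minimize)
model.budget = Constraint(expr=model.x + model.y <= 4)

# Assumes the 'ipopt' executable is installed and on the PATH
solver = SolverFactory("ipopt")
solver.solve(model)
print(value(model.x), value(model.y))   # -> approximately 1.5, 2.5
```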
- General Algebraic Modeling System (GAMS) has Python bindings. But why pay for GAMS when you can use Pyomo?
- AMPL can be called directly from within an IPython notebook using iampl. But then why pay for AMPL when you can use Pyomo?
- Toolkit for Advanced Optimization (TAO): also has Python bindings! Requires the Portable, Extensible Toolkit for Scientific Computation (PETSc)? Some instructions for installing PETSc and TAO for Python.
- NEOS: there are Python bindings that allow you to pass optimization problems directly to the NEOS server and retrieve results. NEOS has a Python API. A possible Python module is PyNEOS. I have now actually used PyNEOS to pass a model file to the NEOS server and retrieve the results in Python! Check out the IPython notebook here!
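Rather than reproduce the PyNEOS API from memory, here is a heavily hedged sketch of talking to the NEOS server over its documented XML-RPC interface using only the standard library; the endpoint and the commented job-submission calls should be checked against the current NEOS documentation before use.

```python
import xmlrpc.client

# NEOS exposes an XML-RPC interface (endpoint taken from the NEOS documentation)
neos = xmlrpc.client.ServerProxy("https://neos-server.org:3333")

print(neos.ping())              # server heartbeat
print(neos.listCategories())    # available problem categories (lp, nco, milp, ...)

# Rough shape of a job submission (illustrative, not verified): the payload is an
# XML document whose fields depend on the solver and input format chosen.
#
#   job_number, password = neos.submitJob(job_xml)
#   status = neos.getJobStatus(job_number, password)       # "Waiting", "Running", "Done", ...
#   results = neos.getFinalResults(job_number, password)   # base64-encoded output
```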