Mathematical Programming

Optimization

Constrained Optimization

  • Optimization problems are usually constrained. There is a taxonomy of problems that differ by:

    • type of objective
      • linear
      • non-linear
        • quadratic
    • type of constraints
      • hard
      • soft (optional with penalty)
      • equality
      • inequality
    • number of variables and constraints
    • Solving method
      • slack variables (convert an inequality into an equality)
      • penalty method (fold constraint violations into the objective)
    • optimal solution or approximation (global or local extrema)
      • convexity
      • differentiability
  • Note that these categories overlap; the grouping is not mutually exclusive

  • Each method has its own inductive bias.
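The penalty idea from the taxonomy above can be sketched in a few lines. This is a toy sketch, not a production solver: the schedule of penalty weights and step sizes below is an arbitrary illustrative choice.

```python
# Toy penalty method: minimize f(x) = x**2 subject to x >= 1 by folding the
# constraint into the objective as a soft quadratic penalty, then tightening it.
# The mu schedule and step sizes are arbitrary illustrative choices.

def grad(x, mu):
    violation = max(0.0, 1.0 - x)   # how far x falls below the bound x >= 1
    return 2*x - 2*mu*violation     # d/dx of x**2 + mu*violation**2

x = 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):  # progressively stiffer penalty
    lr = 0.5 / (1.0 + mu)              # smaller steps as the penalty stiffens
    for _ in range(5000):              # plain gradient descent
        x -= lr * grad(x, mu)
print(round(x, 3))  # -> 0.999, approaching the constrained optimum x = 1
```

As mu grows, the soft penalty approximates the hard constraint ever more closely; the unconstrained minimum x = 0 is pushed toward the feasible boundary x = 1.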

Caution: Boys at work ahead

Two variables, one constraint

Linear Programming

  • Linear objective, linear constraints
  • Simplex method: convert to standard form (a maximization with inequalities turned into equalities via slack variables), then walk vertices of the feasible polytope
  • Interior-point methods are an alternative that traverse the interior of the feasible region
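A minimal LP example using SciPy's `linprog` (which defaults to the HiGHS solver). Since `linprog` minimizes, the objective is negated to express a maximization:

```python
# Small LP: maximize 3x + 2y  subject to  x + y <= 4,  x <= 2,  x, y >= 0.
# linprog minimizes, so we pass the negated objective coefficients.
from scipy.optimize import linprog

res = linprog(c=[-3, -2],                     # maximize 3x + 2y
              A_ub=[[1, 1], [1, 0]],          # x + y <= 4,  x <= 2
              b_ub=[4, 2],
              bounds=[(0, None), (0, None)])  # x, y >= 0
print(res.x)     # optimal point, approximately [2, 2]
print(-res.fun)  # optimal value, 10
```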

Non-linear programming

  • Any objective (linear or non-linear), any constraints (linear or non-linear)
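A small non-linear program sketched with SciPy's SLSQP solver: a non-linear objective under a non-linear inequality constraint. Note that SLSQP only guarantees a local optimum, found from the given starting point:

```python
# Non-linear program: minimize x**2 + y**2 subject to x*y >= 1.
# SLSQP handles non-linear objectives and constraints, but is a local method.
from scipy.optimize import minimize

res = minimize(fun=lambda v: v[0]**2 + v[1]**2,   # non-linear objective
               x0=[2.0, 2.0],                     # starting guess matters
               method="SLSQP",
               constraints=[{"type": "ineq",
                             "fun": lambda v: v[0]*v[1] - 1.0}])  # x*y >= 1
print(res.x)  # a local optimum near [1, 1]
```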

Quadratic programming

  • Quadratic objective, linear constraints; the linear constraints are enforced as hard, while remaining terms can be handled as soft penalties
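A sketch of a small QP, again using SciPy's general-purpose SLSQP for simplicity (dedicated QP solvers exist, but this keeps dependencies minimal). The problem minimizes a quadratic over a polygon defined by linear inequalities:

```python
# QP: minimize (x - 1)**2 + (y - 2.5)**2 subject to linear inequalities.
from scipy.optimize import minimize

objective = lambda v: (v[0] - 1)**2 + (v[1] - 2.5)**2   # quadratic objective
cons = [{"type": "ineq", "fun": lambda v:  v[0] - 2*v[1] + 2},
        {"type": "ineq", "fun": lambda v: -v[0] - 2*v[1] + 6},
        {"type": "ineq", "fun": lambda v: -v[0] + 2*v[1] + 2}]

res = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
               constraints=cons, bounds=[(0, None), (0, None)])
print(res.x)  # approximately [1.4, 1.7]
```

Because the objective is convex and the constraints are linear, the local optimum found here is also the global one.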

Lagrange Multipliers

  • Lagrange multipliers are a general method for handling equality constraints: the constrained problem is converted into an unconstrained one by adding each constraint, scaled by a multiplier, to the objective
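The recipe can be carried out symbolically. A sketch with SymPy, extremizing the textbook example f(x, y) = xy subject to x + y = 1:

```python
# Lagrange multipliers with SymPy: extremize f(x, y) = x*y
# subject to the equality constraint x + y = 1.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
L = x*y - lam*(x + y - 1)  # the Lagrangian

# Stationarity: set all partial derivatives (including w.r.t. lam) to zero.
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)], [x, y, lam])
print(sol)  # x = y = lam = 1/2
```

Differentiating with respect to the multiplier recovers the constraint itself, so solving the stationarity system enforces feasibility automatically.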

There are some very interesting parallels between machine learning and optimisation, which is unsurprising since ML stands on the shoulders of optimisation. Notably, finding global optima is hard; adding constraints can make optima easier to find, but it limits the solution space. In ML this is sometimes called an inductive bias. It can also make things harder.

KKT for inequalities

  • The Karush-Kuhn-Tucker (KKT) conditions generalize Lagrange multipliers to inequality constraints, adding primal/dual feasibility and complementary slackness to stationarity
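The KKT conditions can be checked by hand on a one-variable toy problem, minimize x² subject to x ≥ 1:

```python
# KKT by hand for: minimize x**2 subject to x >= 1
# (constraint as g(x) = 1 - x <= 0 with multiplier mu >= 0,
#  Lagrangian L = x**2 + mu*(1 - x), stationarity: 2x - mu = 0).
# Case 1: constraint inactive (mu = 0) -> x = 0, infeasible since x < 1.
# Case 2: constraint active (x = 1) -> mu = 2 from stationarity.
x, mu = 1.0, 2.0
assert 2*x - mu == 0       # stationarity
assert mu >= 0             # dual feasibility
assert x >= 1              # primal feasibility
assert mu * (1 - x) == 0   # complementary slackness
print("KKT satisfied at x =", x, "mu =", mu)
```

Complementary slackness is the key addition over plain Lagrange multipliers: either a constraint is active or its multiplier vanishes.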



Kevin Chow
Fledgling Computer Scientist