Introduction to Numerical Optimization

Optimization

Our goal is to find values of the variables that optimize the objective. Often the variables are restricted, or constrained, in some way.

There is no universal optimization algorithm but rather a collection of algorithms, each of which is tailored to a particular type of optimization problem.

  • optimality conditions: checking whether a given set of variables is indeed a solution of the problem
  • sensitivity analysis: revealing the sensitivity of the solution to changes in the model and data

Mathematical Formulation

We first introduce some notation:

  • x is the vector of variables, also called unknowns or parameters
  • f is the objective function, a (scalar) function of x that we want to maximize or minimize
  • $c_i$ are constraint functions, which are scalar functions of x that define equations and inequalities that the unknown vector x must satisfy

Using the notation above, the optimization problem can be written as

$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad \begin{cases} c_i(x) = 0, & i \in \mathcal{E}, \\ c_i(x) \geq 0, & i \in \mathcal{I}. \end{cases}$$

Here $\mathcal{E}$ and $\mathcal{I}$ are sets of indices for the equality and inequality constraints, respectively.
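To make the formulation concrete, here is a minimal sketch (the problem instance and the choice of SciPy's SLSQP solver are my own illustration, not from the text) solving a small two-variable instance with $\mathcal{E} = \emptyset$ and two inequality constraints:

```python
# Minimal sketch: min (x1 - 2)^2 + (x2 - 1)^2
#   subject to  x1^2 - x2 <= 0  and  x1 + x2 <= 2.
# SciPy expects inequality constraints in the form c(x) >= 0,
# so each c(x) <= 0 constraint below is negated.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2

cons = [
    {"type": "ineq", "fun": lambda x: x[1] - x[0]**2},   # x1^2 - x2 <= 0
    {"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]},  # x1 + x2 <= 2
]

res = minimize(f, x0=np.array([0.0, 0.0]), method="SLSQP", constraints=cons)
print(res.x, res.fun)  # a feasible point minimizing f
```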

  • feasible region: the set of points satisfying all the constraints, i.e. $\{x \mid c_i(x) = 0,\ i \in \mathcal{E};\ c_i(x) \geq 0,\ i \in \mathcal{I}\}$

Continuous versus Discrete Optimization

discrete optimization

The defining feature of a discrete optimization problem is that the unknown x is drawn from a finite (but often very large) set.

continuous optimization

The feasible set is usually uncountably infinite, as when the components of x are allowed to be real numbers.

Continuous optimization problems are normally easier to solve because the smoothness of the functions makes it possible to use objective and constraint information at a particular point x to deduce information about the function’s behavior at all points close to x.

Continuous optimization techniques often play an important role in solving discrete optimization problems.
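As a hedged illustration of that role (the data and the rounding rule are invented for this example), one common pattern is to solve a continuous relaxation of a discrete problem and then map the result back to the discrete set:

```python
# Sketch: LP relaxation of a tiny 0-1 knapsack problem.
# Instead of searching the finite set x in {0,1}^3 directly, relax to
# 0 <= x <= 1, solve the continuous LP, then round back down.
import numpy as np
from scipy.optimize import linprog

values = np.array([6.0, 10.0, 12.0])
weights = np.array([1.0, 2.0, 3.0])
capacity = 5.0

# linprog minimizes, so negate the values to maximize total value.
res = linprog(-values, A_ub=[weights], b_ub=[capacity], bounds=[(0, 1)] * 3)
x_relaxed = res.x                        # may contain fractional entries
x_rounded = np.floor(x_relaxed + 1e-9)   # flooring keeps the weight constraint satisfied
print(x_relaxed, x_rounded)
```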

Constrained and Unconstrained Optimization

unconstrained optimization

  • $\mathcal{I} = \mathcal{E} = \emptyset$
  • constrained problems can sometimes be recast in unconstrained form by replacing the constraints with penalty terms added to the objective function, as in the sketch below
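A minimal quadratic-penalty sketch of that idea (the example problem and the schedule of penalty weights $\mu$ are my own choices for illustration):

```python
# Replace the equality constraint c(x) = x1 + x2 - 1 = 0 with a quadratic
# penalty term, then solve a sequence of unconstrained problems with
# increasing penalty weight mu.
import numpy as np
from scipy.optimize import minimize

def penalized(x, mu):
    f = x[0]**2 + x[1]**2        # original objective
    c = x[0] + x[1] - 1.0        # equality constraint residual
    return f + mu * c**2         # penalized (unconstrained) objective

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0]:    # larger mu enforces the constraint more tightly
    x = minimize(penalized, x, args=(mu,)).x

print(x)  # approaches the constrained solution (0.5, 0.5) as mu grows
```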

constrained optimization

  • at least one constraint is present, i.e. $\mathcal{I} \cup \mathcal{E} \neq \emptyset$

Stochastic and Deterministic Optimization

In some applications the model cannot be fully specified when the problem is formulated, because it depends on quantities that are not yet known; in deterministic optimization, by contrast, the model is completely known. Stochastic optimization algorithms use quantifications of this uncertainty to produce solutions that optimize the expected performance of the model (a sample-average sketch follows the list below).

  • chance-constrained optimization: in which we ensure that the variables satisfy the given constraints to some specified probability
  • robust optimization: certain constraints are required to hold for all possible values of the uncertain data
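A sample-average sketch of the "optimize expected performance" idea (the distribution, the loss, and the one-dimensional setting are invented for illustration):

```python
# Approximate E[(x - D)^2] for an uncertain quantity D by averaging over
# samples, then minimize the sample-average objective.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
samples = rng.normal(loc=3.0, scale=1.0, size=1000)  # draws of the uncertain D

def expected_cost(x):
    return np.mean((x - samples)**2)  # sample average of the loss

res = minimize_scalar(expected_cost)
print(res.x)  # close to the sample mean, the minimizer of E[(x - D)^2]
```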

Convexity

First, we consider convex sets.

convex set

A set $S$ is convex if for any $x \in S$ and $y \in S$, we have

$$\alpha x + (1 - \alpha) y \in S \quad \text{for all } \alpha \in [0, 1].$$

convex function

  • convex function: a function $f$ is convex if its domain $S$ is a convex set and if for any two points $x$ and $y$ in $S$, the following property is satisfied:

$$f(\alpha x + (1 - \alpha) y) \leq \alpha f(x) + (1 - \alpha) f(y) \quad \text{for all } \alpha \in [0, 1]$$

  • strictly convex function: the inequality above holds strictly whenever $x \neq y$ and $\alpha \in (0, 1)$

  • concave function: $f$ is concave if $-f$ is convex
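A quick numerical sanity check of the defining inequality (illustrative only; $f(x) = \|x\|^2$ is a standard convex example):

```python
# Verify f(a*x + (1-a)*y) <= a*f(x) + (1-a)*f(y) at random points
# for the convex function f(x) = ||x||^2.
import numpy as np

f = lambda x: np.dot(x, x)
rng = np.random.default_rng(0)

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    a = rng.uniform()
    lhs = f(a * x + (1 - a) * y)
    rhs = a * f(x) + (1 - a) * f(y)
    assert lhs <= rhs + 1e-12  # small slack for floating-point error

print("convexity inequality held at all sampled points")
```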

global solution

If the objective function in the optimization problem and the feasible region are both convex, then any local solution of the problem is in fact a global solution.

convex programming

A special case of the general constrained optimization problem in which

  • the objective function is convex
  • the equality constraint functions $c_i(\cdot), i\in\mathcal{E}$, are linear
  • the inequality constraint functions $c_i(\cdot), i\in\mathcal{I}$, are concave (so that each set $\{x \mid c_i(x) \geq 0\}$ is convex)

Optimization Algorithm

Most strategies make use of the values of the objective function $f$ and the constraint functions $c_i$, and possibly the first and second derivatives of these functions. Some algorithms accumulate information gathered at previous iterations, while others use only local information obtained at the current point.
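A bare-bones steepest-descent sketch of this pattern (the fixed step size and stopping rule are arbitrary choices for illustration, not a recommendation from the text): at each iterate, only local first-derivative information at the current point is used.

```python
import numpy as np

def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)                  # local derivative information at x
        if np.linalg.norm(g) < tol:  # stop when the gradient is (nearly) zero
            break
        x = x - step * g             # step in the steepest-descent direction
    return x

# Minimize f(x) = (x1 - 1)^2 + 2*x2^2; its gradient is (2(x1 - 1), 4*x2).
print(gradient_descent(lambda x: np.array([2 * (x[0] - 1), 4 * x[1]]), [5.0, 5.0]))
# converges to approximately (1, 0)
```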

Criteria for judging whether an algorithm is good:

  • Robustness: perform well on a wide variety of problems in its class, for all reasonable starting points
  • Efficiency: do not require excessive computer time or storage
  • Accuracy: identify a solution with precision, without being overly sensitive to errors in the data or to rounding errors

Reference

Jorge Nocedal and Stephen J. Wright, Numerical Optimization, 2nd edition, Springer, 2006.