# Simple function minimization¶

Let’s minimize the following simple function:

$f(x, y) = (x - 3.6)^2 + 2 (y+1.1)^2$

The global minimum is obvious: $f=0$ at $x=3.6$, $y=-1.1$. We now want to find this minimum using iwopy, in a setup as follows:

• Parameter bounds: $x, y \in [-5,5]$

• Unconstrained minimization

• Gradient-based optimizer IPOPT from pygmo

The usage of finite difference gradients will be demonstrated directly afterwards. We start by importing the required classes from the iwopy package:

In :

from iwopy import SimpleProblem, SimpleObjective
from iwopy.interfaces.pygmo import Optimizer_pygmo


The SimpleProblem will be instantiated with two float-type variables x and y, which are passed on to all linked objectives and constraints. The SimpleObjective (and also the SimpleConstraint) class assumes the same variables as the problem, in the same order. We can therefore implement the above function $f(x, y)$ and the derivative

$g(v, x, y) = \begin{cases} \mathrm{d}f / \mathrm{d}x \,, & \text{if } v = 0 \\ \mathrm{d}f / \mathrm{d}y \,, & \text{if } v = 1 \end{cases}$

in a straightforward manner:

In :

class MinFunc(SimpleObjective):
    def f(self, x, y):
        return (x - 3.6) ** 2 + 2 * (y + 1.1) ** 2

    def g(self, v, x, y, components):
        return 2 * (x - 3.6) if v == 0 else 4 * (y + 1.1)


Notice that the components argument of the function g is not used here since f is a single-component function.
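As a quick sanity check independent of iwopy, the analytic derivatives can be compared against central finite differences (plain Python, with f and g repeated here as standalone functions):

```python
# Plain-Python sanity check: the analytic gradient from g should match
# central finite differences of f at an arbitrary sample point.
def f(x, y):
    return (x - 3.6) ** 2 + 2 * (y + 1.1) ** 2

def g(v, x, y):
    return 2 * (x - 3.6) if v == 0 else 4 * (y + 1.1)

x, y, h = 1.0, -2.0, 1e-6
fd_x = (f(x + h, y) - f(x - h, y)) / (2 * h)
fd_y = (f(x, y + h) - f(x, y - h)) / (2 * h)
print(abs(fd_x - g(0, x, y)) < 1e-6, abs(fd_y - g(1, x, y)) < 1e-6)  # True True
```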

In the case of multi-component functions, the parameter n_components has to be passed to the __init__ function of SimpleObjective, and f has to return a list of scalars. In that case g also has to return a list of results, of the same length as the requested components. The same rules apply to classes derived from SimpleConstraint.
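For illustration, the multi-component signatures could look as follows. This is a plain-Python sketch of the contract only; splitting f into two components this way is a hypothetical choice, not part of the example above:

```python
# Sketch of the multi-component contract (n_components=2 in this toy case):
# f returns one scalar per component, and g returns the derivative of each
# requested component with respect to variable v.
def f(x, y):
    return [(x - 3.6) ** 2, 2 * (y + 1.1) ** 2]

def g(v, x, y, components):
    derivs = [
        [2 * (x - 3.6), 0.0],  # d/dx of components 0 and 1
        [0.0, 4 * (y + 1.1)],  # d/dy of components 0 and 1
    ]
    return [derivs[v][c] for c in components]

print([round(v, 2) for v in f(0.0, 0.0)])  # [12.96, 2.42]
print(g(0, 0.0, 0.0, [0, 1]))              # [-7.2, 0.0]
```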

We can now proceed and set up the problem:

In :

problem = SimpleProblem(
    name="minf",
    float_vars=["x", "y"],
    init_values_float=[0.0, 0.0],
    min_values_float=[-5.0, -5.0],
    max_values_float=[5.0, 5.0],
)
problem.initialize()

Problem 'minf' (SimpleProblem): Initializing
--------------------------------------------
n_vars_int   : 0
n_vars_float : 2
--------------------------------------------
n_objectives : 1
n_obj_cmptns : 1
--------------------------------------------
n_constraints: 0
n_con_cmptns : 0
--------------------------------------------


Note that in a similar way you can easily add constraints to the problem, by defining them in a class derived from iwopy.Constraint or iwopy.SimpleConstraint and then adding them via problem.add_constraint(...).

Adding additional objectives works in the same way: simply repeat problem.add_objective(...) as often as you want. However, be aware that not all optimizers can handle multi-objective cases.

Next, we create and initialize the solver:

In :

solver = Optimizer_pygmo(
    problem,
    problem_pars=dict(),
    algo_pars=dict(type="ipopt", tol=1e-4),
)
solver.initialize()
solver.print_info()


Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: pagmo::ipopt

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 1
Individual selection policy: best
Individual replacement policy: best
Numeric options: {tol : 0.0001}



Here tol is an IPOPT parameter that defines the convergence tolerance. Now we are finally ready to solve the problem!

In :

results = solver.solve()
solver.finalize(results)


******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
******************************************************************************

objevals:        objval:      violated:    viol. norm:
1          15.38              0              0
2        9.23375              0              0
3        9.23375              0              0
4        9.23375              0              0
5        9.23375              0              0
6        9.23375              0              0
7        1.60594              0              0
8      0.0440281              0              0
9     0.00157443              0              0
10    1.35988e-06              0              0

Problem name: minf
C++ class name: pybind11::object

Global dimension:                       2
Integer dimension:                      0
Fitness dimension:                      1
Number of objectives:                   1
Equality constraints dimension:         0
Inequality constraints dimension:       0
Lower bounds: [-5, -5]
Upper bounds: [5, 5]
Has batch fitness evaluation: false

Has hessians: false
User implemented hessians sparsity: false

Fitness evaluations: 14

Population size: 1

List of individuals:
#0:
ID:                     3860071275189419792
Decision vector:        [3.6, -1.1]
Fitness vector:         [1.28522e-12]

Champion decision vector: [3.6, -1.1]
Champion fitness: [1.28522e-12]

11    1.28522e-12              0              0

Optimizer_pygmo: Optimization run finished
Success: True
Best f = 1.2852212531163044e-12
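The first row of the objevals table above corresponds to the initial point (0, 0), which is easy to verify by hand:

```python
# The optimization starts at init_values_float = [0.0, 0.0], so the first
# reported objective value is f(0, 0) = 3.6**2 + 2 * 1.1**2:
def f(x, y):
    return (x - 3.6) ** 2 + 2 * (y + 1.1) ** 2

print(round(f(0.0, 0.0), 2))  # 15.38
```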

In :

print(results)

Results problem 'minf':
------------------------
Float variables:
0: x = 3.599999e+00
1: y = -1.100000e+00
------------------------
Objectives:
0: f = 1.285221e-12
------------------------
Success: True
------------------------



The results object is an instance of the class OptResults, cf. the API. It carries the following attributes:

• vars_int: The integer variables of the solution

• vars_float: The float variables of the solution

• objs: The objective function values of the solution

• cons: The constraint function values of the solution

• problem_results: The object returned by the problem when applying the solution

• success: Boolean flag indicating a successful solution

• vnames_int: The names of the integer variables

• vnames_float: The names of the float variables

• onames: The names of the objectives (all components)

• cnames: The names of the constraints (all components)

Next, we want to explore finite difference gradient calculations for the same example. We can either remove the function g(v, x, y, components) from the MinFunc class, or construct the objective with the parameter has_ana_derivs=False. Either way, the analytical gradient definition in the class is then ignored.

Additionally, we need to wrap the problem in an object that provides numerical derivatives. There are two choices:

• DiscretizeRegGrid: A wrapper that evaluates the problem on a regular grid and calculates derivatives from fixed grid point locations. Interpolation in between is optional (and increases the number of required function evaluations). The grid also offers a memory, so that re-calculations of grid point results are minimized.

• LocalFD: A wrapper that applies local finite difference rules for derivative calculations. Here the problem evaluation points depend on the current variables and are therefore not memorized, since they usually differ with every call.

The LocalFD is usually the faster choice, unless you expect substantial benefits from the memory capabilities of the regular grid. Here is an example of how to use it for solving our problem:
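The memory idea behind DiscretizeRegGrid can be illustrated with a toy cache (a plain-Python sketch only, not iwopy's actual implementation):

```python
# Toy grid memoization: overlapping finite difference stencils share grid
# points, so cached results avoid repeated objective evaluations.
calls = 0

def f(x, y):
    global calls
    calls += 1
    return (x - 3.6) ** 2 + 2 * (y + 1.1) ** 2

cache = {}

def f_grid(i, j, delta=0.1):
    # grid point (i, j) maps to the coordinates (i * delta, j * delta)
    if (i, j) not in cache:
        cache[(i, j)] = f(i * delta, j * delta)
    return cache[(i, j)]

# three overlapping two-point stencils touch only four distinct grid points
for i in (0, 1, 2):
    f_grid(i, 0)
    f_grid(i + 1, 0)
print(calls)  # 4, not 6
```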

In :

from iwopy import LocalFD

In :

problem = SimpleProblem(
    name="minf",
    float_vars=["x", "y"],
    init_values_float=[0.0, 0.0],
    min_values_float=[-5.0, -5.0],
    max_values_float=[5.0, 5.0],
)

In :

gproblem = LocalFD(problem, deltas=1e-4, fd_order=2)
gproblem.initialize()

Problem 'minf' (SimpleProblem): Initializing
--------------------------------------------
n_vars_int   : 0
n_vars_float : 2
--------------------------------------------
n_objectives : 1
n_obj_cmptns : 1
--------------------------------------------
n_constraints: 0
n_con_cmptns : 0
--------------------------------------------
Problem 'minf_fd' (LocalFD): Initializing
-----------------------------------------
n_vars_int   : 0
n_vars_float : 2
-----------------------------------------
n_objectives : 1
n_obj_cmptns : 1
-----------------------------------------
n_constraints: 0
n_con_cmptns : 0
-----------------------------------------
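Here fd_order=2 selects second-order (central) differences. A standalone plain-Python check shows why they are preferable to first-order forward differences for the same step size:

```python
# Central differences (order 2) vs forward differences (order 1) for
# df/dx of the example function at x = 1, y = 1:
def f(x, y):
    return (x - 3.6) ** 2 + 2 * (y + 1.1) ** 2

x, y, d = 1.0, 1.0, 1e-4
exact = 2 * (x - 3.6)
central = (f(x + d, y) - f(x - d, y)) / (2 * d)  # error ~ d**2
forward = (f(x + d, y) - f(x, y)) / d            # error ~ d
print(abs(central - exact) < abs(forward - exact))  # True
```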


The gproblem object now corresponds to the original problem with finite difference gradients of order 2. We can use the same gradient-based solver as above for solving this problem:

In :

solver = Optimizer_pygmo(
    gproblem,
    problem_pars=dict(),
    algo_pars=dict(type="ipopt", tol=1e-4),
)
solver.initialize()

results = solver.solve()
solver.finalize(results)


Optimisation return status: Solve_Succeeded (value = 0)

objevals:        objval:      violated:    viol. norm:
1          15.38              0              0
2        9.23375              0              0
3        9.23375              0              0
4        9.23375              0              0
5        9.23375              0              0
6        9.23375              0              0
7        1.60594              0              0
8      0.0440281              0              0
9     0.00157443              0              0
10    1.35988e-06              0              0
11    1.28522e-12              0              0

Problem name: minf_fd
C++ class name: pybind11::object

Global dimension:                       2
Integer dimension:                      0
Fitness dimension:                      1
Number of objectives:                   1
Equality constraints dimension:         0
Inequality constraints dimension:       0
Lower bounds: [-5, -5]
Upper bounds: [5, 5]
Has batch fitness evaluation: false

Has hessians: false
User implemented hessians sparsity: false

Fitness evaluations: 14

Population size: 1

List of individuals:
#0:
ID:                     4797199643997501839
Decision vector:        [3.6, -1.1]
Fitness vector:         [1.28522e-12]

Champion decision vector: [3.6, -1.1]
Champion fitness: [1.28522e-12]

Optimizer_pygmo: Optimization run finished
Success: True
Best f = 1.285221254342924e-12

In :

print(results)

Results problem 'minf_fd':
---------------------------
Float variables:
0: x = 3.599999e+00
1: y = -1.100000e+00
---------------------------
Objectives:
0: f = 1.285221e-12
---------------------------
Success: True
---------------------------