Simple function minimization
Let’s minimize the following simple function:
\[f(x, y) = (x - 3.6)^2 + 2 (y + 1.1)^2\]
The global minimum is obvious: \(f=0\) for \(x=3.6, y=-1.1\). We now want to find this minimum using iwopy, in a setup as follows:
Parameter bounds: \(x, y \in [-5,5]\)
Unconstrained minimization
Analytical gradients
The usage of finite difference gradients will be demonstrated directly afterwards. We start by importing the required classes from the iwopy package:
In [1]:
from iwopy import SimpleProblem, SimpleObjective
from iwopy.interfaces.scipy import Optimizer_scipy
The SimpleProblem will be instantiated with two float type variables x and y, which will be passed on to all linked objectives and constraints. The SimpleObjective (and also the SimpleConstraint) class assumes the same variables as the problem, in the same order. We can therefore implement the above function \(f(x, y)\) and its derivative in a straightforward manner.
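The analytical partial derivatives, implemented in the function g below, are \(\partial f / \partial x = 2 (x - 3.6)\) and \(\partial f / \partial y = 4 (y + 1.1)\):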
In [2]:
class MinFunc(SimpleObjective):
    def f(self, x, y):
        return (x - 3.6) ** 2 + 2 * (y + 1.1) ** 2

    def g(self, v, x, y, components):
        return 2 * (x - 3.6) if v == 0 else 4 * (y + 1.1)
Notice that the components argument of the function g is not used here, since f is a single-component function.
In the case of multi-component functions the parameter n_components has to be passed to the __init__ function of SimpleObjective, and f has to return a list of scalars. In that case g also has to return a list of results, with the same length as the requested components. The same rules apply to classes derived from SimpleConstraint.
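As an illustration only, a hypothetical two-component variant of the objective above could be sketched like this (the class name and the split into components are made up for demonstration, and it is assumed that n_components can be passed as a keyword argument next to the problem):

class TwoCompFunc(SimpleObjective):
    def __init__(self, problem):
        super().__init__(problem, n_components=2)

    def f(self, x, y):
        # one scalar per component
        return [(x - 3.6) ** 2, 2 * (y + 1.1) ** 2]

    def g(self, v, x, y, components):
        # derivatives of both components with respect to variable v,
        # restricted to the requested components
        derivs = [2 * (x - 3.6), 0.0] if v == 0 else [0.0, 4 * (y + 1.1)]
        return [derivs[c] for c in components]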
We can now proceed and set up the problem:
In [3]:
problem = SimpleProblem(
    name="minf",
    float_vars=["x", "y"],
    init_values_float=[0.0, 0.0],
    min_values_float=[-5.0, -5.0],
    max_values_float=[5.0, 5.0],
)
problem.add_objective(MinFunc(problem))
problem.initialize()
Problem 'minf' (SimpleProblem): Initializing
--------------------------------------------
n_vars_int : 0
n_vars_float: 2
--------------------------------------------
n_objectives: 1
n_obj_cmptns: 1
--------------------------------------------
n_constraints: 0
n_con_cmptns: 0
--------------------------------------------
Note that in a similar way you can easily add constraints to the problem, by defining them in a class that is derived from iwopy.Constraint or iwopy.SimpleConstraint and then adding them via problem.add_constraint(...).
Adding additional objectives works in the same way; simply repeat problem.add_objective(...) as often as you want. However, be aware that not all optimizers can handle multi-objective cases.
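For example, a constraint \(x + y \ge 0\) could be sketched roughly as follows. This is only a sketch: the class and the name argument are hypothetical, the constructor arguments are assumed to work as for SimpleObjective, and the sign convention (here assumed to be "satisfied if the returned value is non-positive") should be checked against the iwopy API.

from iwopy import SimpleConstraint

class SumNonNegative(SimpleConstraint):
    def f(self, x, y):
        # require x + y >= 0, expressed as -(x + y) <= 0
        return -(x + y)

problem.add_constraint(SumNonNegative(problem, name="sum_nonneg"))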
Next, we create and initialize the solver:
In [4]:
solver = Optimizer_scipy(
    problem,
    scipy_pars=dict(method="SLSQP", tol=1e-8),
)
solver.initialize()
solver.print_info()
Using optimizer memory, size: 100
Scipy parameters:
-----------------
method: SLSQP
tol: 1e-08
Here tol is an SLSQP parameter that defines the convergence tolerance. Now we are finally ready - let’s solve the problem!
In [5]:
results = solver.solve()
solver.finalize(results)
Optimizer_scipy: Optimization run finished
Success: True
Best f = 1.66552658619115e-16
In [6]:
print(results)
Results problem 'minf':
------------------------
Float variables:
0: x = 3.600000e+00
1: y = -1.100000e+00
------------------------
Objectives:
0: f = 1.665527e-16
------------------------
Success: True
------------------------
The results object is an instance of the class OptResults, cf. API. It carries the attributes
vars_int: The integer variables of the solution
vars_float: The float variables of the solution
objs: The objective function values of the solution
cons: The constraint function values of the solution
problem_results: The object returned by the problem when applying the solution
success: Boolean flag indicating a successful solution
vnames_int: The names of the integer variables
vnames_float: The names of the float variables
onames: The names of the objectives (all components)
cnames: The names of the constraints (all components)
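For example, the optimal point and objective value from the run above can be read off like this (assuming vars_float and objs behave as plain sequences):

x_opt, y_opt = results.vars_float
print("optimal point:", x_opt, y_opt)       # 3.6, -1.1
print("objective value:", results.objs[0])  # close to zero
print("success:", results.success)          # True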
Next, we want to explore finite difference gradient calculations for the same example. We can either remove the function g(v, x, y, components) from the MinFunc class, or create the objective with the parameter has_ana_derivs=False, which will then ignore the analytical gradient definition in the class.
Additionally, we need to wrap the problem into a wrapper that provides numerical derivatives. There are two choices:
DiscretizeRegGrid: A wrapper that evaluates the problem on a regular grid and calculates derivatives based on fixed grid point locations. Interpolation in between is optional (and increases the number of required function evaluations). The grid also offers a memory such that re-calculations of grid point results are minimized (see the sketch after this list).
LocalFD: A wrapper that evaluates local finite difference rules for derivative calculations. The problem evaluation points are defined based on the current variables and are therefore not memorized, since they usually differ with every call.
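For reference, here is a rough sketch of how DiscretizeRegGrid might be used in place of LocalFD. The deltas and fd_order arguments are assumed to be analogous to the LocalFD example further below; consult the iwopy API for the exact set of options.

from iwopy import DiscretizeRegGrid

# wrap the problem, using a regular grid with spacing 1e-2 in each variable
# and second order finite differences at the grid points
gproblem = DiscretizeRegGrid(problem, deltas=1e-2, fd_order=2)
gproblem.initialize()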
The LocalFD wrapper is usually the faster choice, unless you are expecting a lot of benefit from the memory capabilities of the regular grid. Here is an example of how to use it for solving our problem:
In [7]:
from iwopy import LocalFD
In [8]:
problem = SimpleProblem(
    name="minf",
    float_vars=["x", "y"],
    init_values_float=[0.0, 0.0],
    min_values_float=[-5.0, -5.0],
    max_values_float=[5.0, 5.0],
)
problem.add_objective(MinFunc(problem, has_ana_derivs=False))
In [9]:
gproblem = LocalFD(problem, deltas=1e-4, fd_order=2)
gproblem.initialize()
Problem 'minf' (SimpleProblem): Initializing
--------------------------------------------
n_vars_int : 0
n_vars_float: 2
--------------------------------------------
n_objectives: 1
n_obj_cmptns: 1
--------------------------------------------
n_constraints: 0
n_con_cmptns: 0
--------------------------------------------
Problem 'minf_fd' (LocalFD): Initializing
-----------------------------------------
n_vars_int : 0
n_vars_float: 2
-----------------------------------------
n_objectives: 1
n_obj_cmptns: 1
-----------------------------------------
n_constraints: 0
n_con_cmptns: 0
-----------------------------------------
The gproblem object now corresponds to the original problem, with finite difference gradients of order 2. We can use the same gradient based solver as above for solving this problem:
In [10]:
solver = Optimizer_scipy(
    gproblem,
    scipy_pars=dict(method="SLSQP", tol=1e-8),
)
solver.initialize()
results = solver.solve()
solver.finalize(results)
Using optimizer memory, size: 100
Optimizer_scipy: Optimization run finished
Success: True
Best f = 1.66552658619115e-16
In [11]:
print(results)
Results problem 'minf_fd':
---------------------------
Float variables:
0: x = 3.600000e+00
1: y = -1.100000e+00
---------------------------
Objectives:
0: f = 1.665527e-16
---------------------------
Success: True
---------------------------