# Electrostatic potential minimization¶

The main purpose of iwopy is to be helpful when attacking more complicated optimization tasks than the minimization of simple analytical functions. As an example, we consider the minimization of an inverse distance type potential for a fully coupled system of $$N$$ particles in two dimensions. We can imagine such a system to be composed of $$N$$ individual point charges, each of them carrying the same electric unit charge. The potential that we want to minimize is then represented by

$V = \sum_{i\neq j} \frac 1 {|\mathrm r_i - \mathrm r_j|} \ , \quad \text{where} \quad \mathrm r_i = \left( \begin{array}{c} x_i \\ y_i \end{array} \right)$

denotes the two-dimensional position vector of charge $$i$$, and the sum runs over all unequal index pairs. This potential favours large distances between charges, hence we need to constrain them to a certain area for a meaningful solution. We confine them to a circle of radius $$R$$ by imposing $$N$$ constraints,

$|\mathrm r_i| \leq R \ , \quad \text{for all} \quad i = 0 \ldots N-1 \ .$
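To make the objective concrete, here is a standalone sketch (independent of iwopy) that evaluates $$V$$ for a hypothetical toy configuration of three unit charges on a line, counting each unordered pair twice as in the sum over all $$i \neq j$$:

```python
import numpy as np

# Hypothetical toy configuration: three unit charges on the x-axis
xy = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])

# V = sum over all i != j of 1/|r_i - r_j|
diffs = xy[:, None, :] - xy[None, :, :]  # pairwise difference vectors, shape (3, 3, 2)
dist = np.linalg.norm(diffs, axis=-1)    # pairwise distances, shape (3, 3)
mask = ~np.eye(len(xy), dtype=bool)      # exclude the i == j terms
V = np.sum(1.0 / dist[mask])
print(V)  # 2 * (1/1 + 1/3 + 1/2) = 11/3 ≈ 3.6667
```

The three unordered pairs have distances 1, 3 and 2, and each contributes twice, so the result is $$11/3$$.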

These are the required imports for this example:

In :

import numpy as np
import matplotlib.pyplot as plt
from iwopy import Problem, Objective, Constraint
from iwopy.interfaces.pymoo import Optimizer_pymoo


We start by creating a specific class that describes the variables of our problem:

In :

class ChargesProblem(Problem):
    def __init__(self, xy_init, radius):
        super().__init__(name="charges_problem")
        self.xy_init = xy_init
        self.radius = radius
        self.n_charges = len(xy_init)

    def var_names_float(self):
        """Defines the variable names"""
        vnames = []
        for i in range(self.n_charges):
            vnames += [f"x{i}", f"y{i}"]
        return vnames

    def initial_values_float(self):
        """Returns initial values, as given to constructor"""
        return self.xy_init.reshape(2 * self.n_charges)

    def min_values_float(self):
        """Minimal values for a square of size 2*radius"""
        return np.full(2 * self.n_charges, -self.radius)

    def max_values_float(self):
        """Maximal values for a square of size 2*radius"""
        return np.full(2 * self.n_charges, self.radius)

    def apply_individual(self, vars_int, vars_float):
        """Returns (x, y) variables for each charge"""
        return vars_float.reshape(self.n_charges, 2)

    def apply_population(self, vars_int, vars_float):
        """Returns (x, y) variables for each charge per individual"""
        n_pop = len(vars_float)
        return vars_float.reshape(n_pop, self.n_charges, 2)

    def get_fig(self, xy):
        """Get a figure showing the charges locations"""
        fig, ax = plt.subplots(figsize=(6, 6))
        ax.scatter(xy[:, 0], xy[:, 1], color="orange")
        ax.set_xlabel("x")
        ax.set_ylabel("y")
        ax.set_title(f"N = {self.n_charges}")
        return fig


There are $$2 N$$ variables for this problem, and we order them as $$(x_0, y_0, \ldots, x_{N-1}, y_{N-1})$$. This is convenient, since the numpy reshaping operation vars_float.reshape(n_charges, 2) then yields an array of $$(x_i, y_i)$$ pairs, which is handy for calculations.
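The ordering assumption can be checked in a quick sketch: reshaping the flat variable vector recovers the coordinate pair of each charge.

```python
import numpy as np

# Flat variable vector in the ordering (x0, y0, x1, y1, x2, y2)
vars_float = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
n_charges = 3

# Row i of the reshaped array is the coordinate pair (x_i, y_i) of charge i
xy = vars_float.reshape(n_charges, 2)
print(xy[1])  # charge 1 has coordinates (x1, y1) = [2. 3.]
```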

Notice the two functions apply_individual and apply_population. They are being called during optimization whenever new variables are being set by the optimizer, and their purpose is to update the data in the problem class accordingly (excluding objectives and constraints, which we will deal with shortly).

The difference between the two functions is that apply_individual evaluates a single vector of new problem variables, whereas apply_population handles a full population of such vectors. Implementing the latter is not strictly required (otherwise a loop over the individuals of the population is evaluated instead), but it offers the chance of a vast speed-up through vectorized evaluation. This is particularly beneficial for genetic algorithms and other easily vectorizable approaches (in fact, the gradient calculation via iwopy discretizations also makes use of this vectorization). Of course, the chosen optimizer must support vectorized evaluation for this to work. Both functions may return evaluation data of any kind, which is then forwarded to the objective and constraint evaluation.
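The relation between the two code paths can be illustrated in a small standalone sketch (shapes chosen arbitrarily here): looping over individuals and reshaping each one produces exactly the same result as a single reshape of the whole population array, but the latter is one vectorized operation.

```python
import numpy as np

n_pop, n_charges = 4, 3
rng = np.random.default_rng(0)
# One row of 2*N floats per individual, as handed over by the optimizer
pop_vars = rng.uniform(-1.0, 1.0, (n_pop, 2 * n_charges))

# Loop fallback: reshape each individual separately
looped = np.stack([v.reshape(n_charges, 2) for v in pop_vars])

# Vectorized: one reshape for the whole population
vectorized = pop_vars.reshape(n_pop, n_charges, 2)

assert np.array_equal(looped, vectorized)
```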

So far we have only defined the problem variables. Objectives and constraints can be added to the problem via the add_objective and add_constraint functions, respectively. First, we implement the inverse distance potential as our objective:

In :

class MinPotential(Objective):
    def __init__(self, problem):
        """Define the same variable names and ordering as in the problem"""
        super().__init__(problem, "potential", vnames_float=problem.var_names_float())
        self.n_charges = problem.n_charges

    def n_components(self):
        """The potential is a scalar function, hence one component"""
        return 1

    def maximize(self):
        """Indicates that the single component is to be minimized"""
        return [False]

    def calc_individual(self, vars_int, vars_float, problem_results, components=None):
        """This evaluates the potential. See problem.apply_individual
        for problem results"""
        xy = problem_results
        value = 0.0
        for i in range(1, self.n_charges):
            dist = np.maximum(np.linalg.norm(xy[i - 1, None] - xy[i:], axis=-1), 1e-10)
            value += 2 * np.sum(1 / dist)
        return value

    def calc_population(self, vars_int, vars_float, problem_results, components=None):
        """This evaluates the potential in vectorized manner.
        See problem.apply_population for problem results"""
        xy = problem_results
        n_pop = len(xy)
        value = np.zeros((n_pop, 1))
        for i in range(1, self.n_charges):
            dist = np.maximum(
                np.linalg.norm(xy[:, i - 1, None] - xy[:, i:], axis=-1), 1e-10
            )
            value[:, 0] += 2 * np.sum(1 / dist, axis=1)
        return value
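As a standalone sanity check (independent of the classes above), the two evaluation strategies can be verified against each other: the per-individual loop and the population-wide broadcast compute the same potential values.

```python
import numpy as np

def potential_individual(xy):
    """Scalar potential for one configuration, as in calc_individual."""
    value = 0.0
    for i in range(1, len(xy)):
        dist = np.maximum(np.linalg.norm(xy[i - 1, None] - xy[i:], axis=-1), 1e-10)
        value += 2 * np.sum(1 / dist)
    return value

def potential_population(xy_pop):
    """Same potential for a whole population at once, as in calc_population."""
    n_pop, n = xy_pop.shape[:2]
    value = np.zeros(n_pop)
    for i in range(1, n):
        dist = np.maximum(
            np.linalg.norm(xy_pop[:, i - 1, None] - xy_pop[:, i:], axis=-1), 1e-10
        )
        value += 2 * np.sum(1 / dist, axis=1)
    return value

rng = np.random.default_rng(42)
xy_pop = rng.uniform(-5, 5, (10, 6, 2))  # 10 individuals, 6 charges each

v_loop = np.array([potential_individual(xy) for xy in xy_pop])
v_vec = potential_population(xy_pop)
assert np.allclose(v_loop, v_vec)
```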


Notice that calc_individual and its (again optional) vectorized counterpart calc_population do not directly use the variables vector vars_float, although they could (and doing so would be perfectly fine, even intended); instead they rely on the problem_results provided by the problem functions apply_individual and apply_population.

Next, we implement the $$N$$ radial constraints:

In :

class MaxRadius(Constraint):
    def __init__(self, problem, tol=1e-2):
        """Define the same variable names and ordering as in the problem"""
        super().__init__(
            problem, "radius", vnames_float=problem.var_names_float(), tol=tol
        )
        self.radius = problem.radius
        self.n_charges = problem.n_charges

    def n_components(self):
        """One constraint per charge"""
        return self.n_charges

    def vardeps_float(self):
        """Boolean array that defines which component depends
        on which variable (optional, default is on all)"""
        deps = np.zeros((self.n_components(), self.n_charges, 2), dtype=bool)
        np.fill_diagonal(deps[..., 0], True)
        np.fill_diagonal(deps[..., 1], True)
        return deps.reshape(self.n_components(), 2 * self.n_charges)

    def calc_individual(self, vars_int, vars_float, problem_results, components=None):
        """This evaluates the constraints, negative values are valid.
        See problem.apply_individual for problem results"""
        components = np.s_[:] if components is None else components
        xy = problem_results[components]
        r = np.linalg.norm(xy, axis=-1)
        return r - self.radius

    def calc_population(self, vars_int, vars_float, problem_results, components=None):
        """This evaluates the constraints in vectorized manner,
        negative values are valid. See problem.apply_population for
        problem results"""
        components = np.s_[:] if components is None else components
        xy = problem_results[:, components]
        r = np.linalg.norm(xy, axis=-1)
        return r - self.radius

Note that by default, negative constraint values represent validity. This can be changed by overloading the function get_bounds of the base class, see API, but is not recommended.

Also note that the function vardeps_float is only relevant for gradient based solvers that support gradient sparsity (e.g. pygmo.ipopt). It is irrelevant for the genetic algorithm that we want to use in this example, but added for the sake of completeness.
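The sparsity pattern that vardeps_float encodes can be visualized for a small case: constraint component $$i$$ depends only on the two variables $$(x_i, y_i)$$ of charge $$i$$, which yields a block-diagonal boolean matrix. Here the same construction is shown for $$N = 3$$:

```python
import numpy as np

# Sparsity pattern for N = 3 charges: constraint i depends only on (x_i, y_i)
n_charges = 3
deps = np.zeros((n_charges, n_charges, 2), dtype=bool)
np.fill_diagonal(deps[..., 0], True)  # dependence on x_i
np.fill_diagonal(deps[..., 1], True)  # dependence on y_i
deps = deps.reshape(n_charges, 2 * n_charges)

print(deps.astype(int))
# [[1 1 0 0 0 0]
#  [0 0 1 1 0 0]
#  [0 0 0 0 1 1]]
```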

Now let’s add the objective and the constraints and initialize the problem:

In :

N = 20
R = 5

# generate random initial coordinates of the N charges:
xy = np.random.uniform(-R / 10.0, R / 10.0, (N, 2))

problem = ChargesProblem(xy, R)
problem.add_objective(MinPotential(problem))
problem.add_constraint(MaxRadius(problem))
problem.initialize()

fig = problem.get_fig(xy)
plt.show()

Problem 'charges_problem' (ChargesProblem): Initializing
--------------------------------------------------------
n_vars_int   : 0
n_vars_float : 40
--------------------------------------------------------
n_objectives : 1
n_obj_cmptns : 1
--------------------------------------------------------
n_constraints: 1
n_con_cmptns : 20
--------------------------------------------------------

Finally, we can now set up the genetic algorithm GA from pymoo and solve the problem, here in vectorized form (flag vectorize=True):

In :

solver = Optimizer_pymoo(
    problem,
    problem_pars=dict(
        vectorize=True,
    ),
    algo_pars=dict(
        type="GA",
        pop_size=100,
        seed=1,
    ),
    setup_pars=dict(),
    term_pars=dict(
        type="default",
        n_max_gen=300,
        ftol=1e-6,
        xtol=1e-6,
    ),
)
solver.initialize()
solver.print_info()

results = solver.solve(verbosity=0)
solver.finalize(results)

Initializing Optimizer_pymoo
Selecting sampling: float_random (FloatRandomSampling)
Selecting algorithm: GA (GA)
Selecting termination: default (DefaultSingleObjectiveTermination)

Problem:
--------
vectorize: True

Algorithm:
----------
type: GA
pop_size: 100
seed: 1

Termination:
------------
n_max_gen: 300
ftol: 1e-06
xtol: 1e-06

Optimizer_pymoo: Optimization run finished
Success: True
Best potential = 78.10126626778782

In :

print(results)

xy = results.problem_results
fig = problem.get_fig(xy)
plt.show()

Results problem 'charges_problem':
-----------------------------------
Float variables:
0: x0  = 2.347893e+00
1: y0  = -1.306908e+00
2: x1  = 4.755656e+00
3: y1  = 1.538807e+00
4: x2  = -9.254691e-01
5: y2  = -4.913092e+00
6: x3  = -1.093399e+00
7: y3  = -1.472314e+00
8: x4  = -4.727735e+00
9: y4  = -1.622439e+00
10: x5  = -1.249640e+00
11: y5  = 4.841101e+00
12: x6  = -1.741530e+00
13: y6  = 1.573486e+00
14: x7  = -4.400393e+00
15: y7  = 2.373510e+00
16: x8  = 2.211133e+00
17: y8  = 4.478829e+00
18: x9  = -3.887042e+00
19: y9  = -3.142983e+00
20: x10 = 1.681425e+00
21: y10 = 1.576298e+00
22: x11 = 2.432938e+00
23: y11 = -4.366588e+00
24: x12 = 5.763072e-01
25: y12 = -4.966442e+00
26: x13 = -4.982547e+00
27: y13 = 4.135544e-01
28: x14 = 4.925992e+00
29: y14 = -8.455471e-01
30: x15 = 4.396135e-01
31: y15 = 4.980480e+00
32: x16 = -2.518958e+00
33: y16 = -4.316364e+00
34: x17 = 3.941714e+00
35: y17 = -3.072407e+00
36: x18 = -3.025664e+00
37: y18 = 3.976280e+00
38: x19 = 3.691092e+00
39: y19 = 3.370534e+00
-----------------------------------
Objectives:
0: potential = 7.810127e+01
-----------------------------------
Constraints:
-----------------------------------
Success: True
-----------------------------------

Note that this problem has many equivalent solutions, and the displayed result is the best solution that the genetic algorithm was able to find within its convergence criteria. With heuristic methods there is generally no guarantee that a global optimum has been found. However, the objective function has clearly been minimized, and therefore the "design" was improved, which is often the goal in engineering tasks.
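Even without a global optimality guarantee, feasibility of a returned layout is easy to verify. The following sketch (with a hypothetical helper name and example points) checks the radius constraint on an (N, 2) coordinate array:

```python
import numpy as np

def check_radius_constraint(xy, radius, tol=1e-2):
    """True if all charges lie within the circle, up to tolerance tol."""
    r = np.linalg.norm(xy, axis=-1)
    return bool(np.all(r <= radius + tol))

# Hypothetical example layout: points on the boundary are still valid
xy = np.array([[5.0, 0.0], [0.0, -5.0], [3.0, 4.0]])
print(check_radius_constraint(xy, radius=5.0))        # True
print(check_radius_constraint(xy * 1.1, radius=5.0))  # False
```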

For the fun of it, let's also solve the problem using iwopy's GG solver (short for Greedy Gradient), now for 50 charges. This solver is a simple steepest descent algorithm that projects out search directions which would violate constraints (and reverts those that already do). It supports iwopy's vectorization capabilities and hence runs fast even for many variables. It is, of course, not as advanced as other gradient based solvers, but for this particular problem it does a good job:
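The idea behind such a solver can be sketched in plain numpy, not as iwopy's actual GG implementation but as a toy projected steepest descent for this exact potential: each step moves the charges against the analytic gradient of $$V$$, and any charge that would leave the circle is projected radially back onto it.

```python
import numpy as np

def potential_grad(xy):
    """Analytic gradient of V = sum_{i!=j} 1/|r_i - r_j| w.r.t. all positions."""
    diffs = xy[:, None, :] - xy[None, :, :]  # (N, N, 2)
    dist = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dist, np.inf)           # ignore the i == j terms
    # dV/dr_i = -2 * sum_j (r_i - r_j) / |r_i - r_j|^3
    return -2 * np.sum(diffs / dist[..., None] ** 3, axis=1)

def greedy_descent(xy, radius, step=0.05, n_steps=500):
    """Toy steepest descent with projection back onto the feasible circle."""
    xy = xy.copy()
    for _ in range(n_steps):
        g = potential_grad(xy)
        # normalized step: each charge moves a fixed distance downhill
        xy -= step * g / np.maximum(np.linalg.norm(g, axis=-1, keepdims=True), 1e-10)
        # project violating charges radially back onto the circle
        r = np.linalg.norm(xy, axis=-1)
        out = r > radius
        xy[out] *= (radius / r[out])[:, None]
    return xy

rng = np.random.default_rng(7)
xy0 = rng.uniform(-0.5, 0.5, (10, 2))  # clustered start, high potential
xy_opt = greedy_descent(xy0, radius=5.0)
assert np.all(np.linalg.norm(xy_opt, axis=-1) <= 5.0 + 1e-9)
```

This toy version uses a fixed normalized step; iwopy's GG additionally adapts the step size between step_max and step_min, which is what the solver parameters below control.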

In :

from iwopy import LocalFD
from iwopy.optimizers import GG

In :

N = 50
R = 5

xy = np.random.uniform(-R / 10.0, R / 10.0, (N, 2))

problem = ChargesProblem(xy, R)
problem.add_objective(MinPotential(problem))
problem.add_constraint(MaxRadius(problem))

gproblem = LocalFD(problem, deltas=1e-2, fd_order=1)
gproblem.initialize()

solver = GG(
    gproblem,
    step_max=0.1,
    step_min=1e-4,
    vectorized=True,
)
solver.initialize()

results = solver.solve(verbosity=0)
solver.finalize(results)
print(results)

Problem 'charges_problem' (ChargesProblem): Initializing
--------------------------------------------------------
n_vars_int   : 0
n_vars_float : 100
--------------------------------------------------------
n_objectives : 1
n_obj_cmptns : 1
--------------------------------------------------------
n_constraints: 1
n_con_cmptns : 50
--------------------------------------------------------
Problem 'charges_problem_fd' (LocalFD): Initializing
----------------------------------------------------
n_vars_int   : 0
n_vars_float : 100
----------------------------------------------------
n_objectives : 1
n_obj_cmptns : 1
----------------------------------------------------
n_constraints: 1
n_con_cmptns : 50
----------------------------------------------------
GG: Optimization run finished
Success: True
Best potential = 586.5723344400639
Results problem 'charges_problem_fd':
--------------------------------------
Float variables:
0: x0  = 4.000382e+00
1: y0  = 2.999490e+00
2: x1  = 5.079732e-01
3: y1  = 3.691573e+00
4: x2  = 2.114143e+00
5: y2  = -3.098944e+00
6: x3  = 3.277208e+00
7: y3  = 1.723608e+00
8: x4  = -4.495711e+00
9: y4  = -2.188282e+00
10: x5  = -1.348944e+00
11: y5  = -3.464171e+00
12: x6  = 3.340019e+00
13: y6  = -3.720789e+00
14: x7  = -2.457939e+00
15: y7  = -4.354140e+00
16: x8  = -7.376010e-01
17: y8  = 4.945295e+00
18: x9  = -4.996095e+00
19: y9  = -1.975738e-01
20: x10 = 4.553684e+00
21: y10 = 2.064937e+00
22: x11 = 5.000000e+00
23: y11 = 1.718687e-03
24: x12 = 1.281237e+00
25: y12 = -1.583603e+00
26: x13 = 2.468452e+00
27: y13 = -4.348189e+00
28: x14 = -4.182320e+00
29: y14 = 2.740109e+00
30: x15 = -3.275141e+00
31: y15 = -3.778023e+00
32: x16 = -4.932686e+00
33: y16 = 8.176837e-01
34: x17 = -1.193985e+00
35: y17 = 3.486114e+00
36: x18 = 5.148106e-01
37: y18 = -4.973427e+00
38: x19 = 4.561294e+00
39: y19 = -2.048072e+00
40: x20 = 8.243842e-01
41: y20 = 1.920188e+00
42: x21 = 4.885821e+00
43: y21 = 1.062429e+00
44: x22 = -3.620277e+00
45: y22 = -8.168378e-01
46: x23 = -7.371890e-01
47: y23 = -1.847337e+00
48: x24 = 4.031678e+00
49: y24 = -2.957291e+00
50: x25 = 4.886153e+00
51: y25 = -1.060901e+00
52: x26 = -5.058800e-01
53: y26 = -4.974342e+00
54: x27 = 3.272922e+00
55: y27 = 3.779945e+00
56: x28 = 2.361521e+00
57: y28 = 4.407178e+00
58: x29 = -4.654690e+00
59: y29 = 1.825885e+00
60: x30 = -4.847042e+00
61: y30 = -1.227270e+00
62: x31 = 2.198517e+00
63: y31 = 3.050567e+00
64: x32 = -2.726291e+00
65: y32 = -2.324120e+00
66: x33 = 1.523045e+00
67: y33 = -4.762387e+00
68: x34 = 3.740160e-02
69: y34 = 3.831777e-02
70: x35 = 3.231579e+00
71: y35 = -1.691136e+00
72: x36 = 3.122392e-01
73: y36 = 4.990241e+00
74: x37 = -1.199956e+00
75: y37 = 1.655533e+00
76: x38 = -1.763976e+00
77: y38 = 4.678503e+00
78: x39 = -3.968261e+00
79: y39 = -3.041860e+00
80: x40 = 2.049042e+00
81: y40 = 3.099094e-01
82: x41 = -2.695721e+00
83: y41 = 4.211067e+00
84: x42 = -1.985593e+00
85: y42 = -2.208243e-01
86: x43 = -3.508881e+00
87: y43 = 9.764948e-01
88: x44 = 4.255267e-01
89: y44 = -3.580911e+00
90: x45 = -1.502724e+00
91: y45 = -4.768839e+00
92: x46 = -2.743736e+00
93: y46 = 2.512271e+00
94: x47 = 3.744293e+00
95: y47 = -3.509546e-02
96: x48 = -3.508379e+00
97: y48 = 3.562482e+00
98: x49 = 1.383104e+00
99: y49 = 4.804896e+00
--------------------------------------
Objectives:
0: potential = 5.865723e+02
--------------------------------------
Constraints:

In :

xy = results.problem_results 