{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Electrostatic potential minimization" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The main purpose of `iwopy` is to be helpful when attacking more complicated optimization tasks than the minimization of simple analytical functions. As an example, we consider the minimization of an inverse distance type potential for a fully coupled system of $N$ particles in two dimensions. We can imagine such a system to be composed of $N$ individual point charges, each of them carrying the same electric unit charge. The potential that we want to minimize is then represented by\n", "$$\n", "V = \\sum_{i\\neq j} \\frac 1 {|\\mathrm r_i - \\mathrm r_j|} \\ , \\quad \\text{where} \\quad \\mathrm r_i = \\left( \\begin{array}{c} x_i \\\\ y_i \\end{array} \\right)\n", "$$\n", "denotes the two dimensional position vector of charge $i$ and the sum is over all unequal index pairs. This potential favours large distances between charges, hence we need to constrain them to a certain area for a meaningful solution. 
We confine them to a circle of radius $R$ by imposing $N$ constraints,\n", "$$\n", "|\mathrm r_i| \leq R \ , \quad \text{for all} \quad i = 0 \ldots N-1 \ .\n", "$$\n", "These are the required imports for this example:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pyplot as plt\n", "from iwopy import Problem, Objective, Constraint\n", "from iwopy.interfaces.pymoo import Optimizer_pymoo" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start by creating a specific class that describes the variables of our problem:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class ChargesProblem(Problem):\n", " def __init__(self, xy_init, radius):\n", " super().__init__(name=\"charges_problem\")\n", " self.xy_init = xy_init\n", " self.n_charges = len(xy_init)\n", " self.radius = radius\n", "\n", " def var_names_float(self):\n", " \"\"\"Defines the variable names\"\"\"\n", " vnames = []\n", " for i in range(self.n_charges):\n", " vnames += [f\"x{i}\", f\"y{i}\"]\n", " return vnames\n", "\n", " def initial_values_float(self):\n", " \"\"\"Returns initial values, as given to the constructor\"\"\"\n", " return self.xy_init.reshape(2 * self.n_charges)\n", "\n", " def min_values_float(self):\n", " \"\"\"Minimal values for a square of size 2*radius\"\"\"\n", " return np.full(2 * self.n_charges, -self.radius)\n", "\n", " def max_values_float(self):\n", " \"\"\"Maximal values for a square of size 2*radius\"\"\"\n", " return np.full(2 * self.n_charges, self.radius)\n", "\n", " def apply_individual(self, vars_int, vars_float):\n", " \"\"\"Returns (x, y) variables for each charge\"\"\"\n", " return vars_float.reshape(self.n_charges, 2)\n", "\n", " def apply_population(self, vars_int, vars_float):\n", " \"\"\"Returns (x, y) variables for each charge per individual\"\"\"\n", " n_pop = len(vars_float)\n", " return 
vars_float.reshape(n_pop, self.n_charges, 2)\n", "\n", " def get_fig(self, xy):\n", " \"\"\"Get a figure showing the charge locations\"\"\"\n", " fig, ax = plt.subplots(figsize=(6, 6))\n", " ax.scatter(xy[:, 0], xy[:, 1], color=\"orange\")\n", " ax.add_patch(plt.Circle((0, 0), self.radius, color=\"darkred\", fill=False))\n", " ax.set_aspect(\"equal\", adjustable=\"box\")\n", " ax.set_xlabel(\"x\")\n", " ax.set_ylabel(\"y\")\n", " ax.set_title(f\"N = {self.n_charges}\")\n", " return fig" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "There are $2 N$ variables for this problem, and we order them as $(x_0, y_0, \\ldots, x_{N-1}, y_{N-1})$. This is convenient, since the numpy reshaping operation `vars_float.reshape(n_charges, 2)` then represents an array of $(x_i, y_i)$ type arrays, which is handy for calculations.\n", "\n", "Notice the two functions `apply_individual` and `apply_population`. They are called during optimization whenever the optimizer sets new variables, and their purpose is to update the data in the problem class accordingly (excluding objectives and constraints, which we will deal with shortly).\n", "\n", "The difference between the two functions is that `apply_individual` evaluates a single vector of new problem variables, whereas `apply_population` handles a full population of such vectors. Implementing the latter is not strictly required (otherwise a loop over the individuals of the population is evaluated instead), but it offers the chance of a vast speed-up through vectorized evaluation. This is particularly beneficial for genetic algorithms or other easily vectorizable approaches (in fact, the gradient calculation via `iwopy` discretizations also makes use of this vectorization). Of course, the chosen optimizer must support vectorized evaluation for this to work. 
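To illustrate the point with a standalone sketch (the shapes here are made up for demonstration): reshaping the whole population matrix in one call gives exactly the same result as looping over the individuals, but in a single vectorized numpy operation:

```python
import numpy as np

n_pop, n_charges = 4, 3
rng = np.random.default_rng(0)
vars_float = rng.uniform(-1.0, 1.0, (n_pop, 2 * n_charges))

# vectorized: one reshape for the whole population at once
xy_pop = vars_float.reshape(n_pop, n_charges, 2)

# equivalent loop over the individuals, one reshape each
xy_loop = np.stack([v.reshape(n_charges, 2) for v in vars_float])

print(np.array_equal(xy_pop, xy_loop))  # True
```
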
Both functions return any kind of evaluation data which will be forwarded to the objective and constraint evaluation.\n", "\n", "So far we have only defined the problem variables. Objectives and constraints can be added to the problem via the `add_objective` and `add_constraint` functions, respectively. First, we implement the inverse distance potential as our objective:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MinPotential(Objective):\n", " def __init__(self, problem):\n", " \"\"\"Define the same variable names and ordering as in the problem\"\"\"\n", " super().__init__(problem, \"potential\", vnames_float=problem.var_names_float())\n", " self.n_charges = problem.n_charges\n", "\n", " def n_components(self):\n", " \"\"\"The potential is a scalar function, hence one component\"\"\"\n", " return 1\n", "\n", " def maximize(self):\n", " \"\"\"Indicates that the single component is to be minimized\"\"\"\n", " return [False]\n", "\n", " def calc_individual(self, vars_int, vars_float, problem_results, components=None):\n", " \"\"\"This evaluates the potential. 
See problem.apply_individual\n", " for problem results\"\"\"\n", " xy = problem_results\n", " value = 0.0\n", " for i in range(1, self.n_charges):\n", " dist = np.maximum(np.linalg.norm(xy[i - 1, None] - xy[i:], axis=-1), 1e-10)\n", " value += 2 * np.sum(1 / dist)\n", " return value\n", "\n", " def calc_population(self, vars_int, vars_float, problem_results, components=None):\n", " \"\"\"This evaluates the potential in a vectorized manner.\n", " See problem.apply_population for problem results\"\"\"\n", " xy = problem_results\n", " n_pop = len(xy)\n", " value = np.zeros((n_pop, 1))\n", " for i in range(1, self.n_charges):\n", " dist = np.maximum(\n", " np.linalg.norm(xy[:, i - 1, None] - xy[:, i:], axis=-1), 1e-10\n", " )\n", " value[:, 0] += 2 * np.sum(1 / dist, axis=1)\n", " return value" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Notice that `calc_individual` and its (again optional) vectorized counterpart `calc_population` do not directly make use of the variable vector `vars_float`, which they could (and that would be perfectly fine, even intended), but instead use the `problem_results` provided by the problem functions `apply_individual` and `apply_population`.\n", "\n", "Next, we implement the $N$ radial constraints:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "class MaxRadius(Constraint):\n", " def __init__(self, problem, tol=1e-2):\n", " \"\"\"Define the same variable names and ordering as in the problem\"\"\"\n", " super().__init__(\n", " problem, \"radius\", vnames_float=problem.var_names_float(), tol=tol\n", " )\n", " self.n_charges = problem.n_charges\n", " self.radius = problem.radius\n", "\n", " def n_components(self):\n", " \"\"\"One constraint per charge\"\"\"\n", " return self.n_charges\n", "\n", " def vardeps_float(self):\n", " \"\"\"Boolean array that defines which component depends\n", " on which variable (optional, default is on all)\"\"\"\n", " deps = 
np.zeros((self.n_components(), self.n_charges, 2), dtype=bool)\n", " np.fill_diagonal(deps[..., 0], True)\n", " np.fill_diagonal(deps[..., 1], True)\n", " return deps.reshape(self.n_components(), 2 * self.n_charges)\n", "\n", " def calc_individual(self, vars_int, vars_float, problem_results, components=None):\n", " \"\"\"This evaluates the constraints, negative values are valid.\n", " See problem.apply_individual for problem results\"\"\"\n", " components = np.s_[:] if components is None else components\n", " xy = problem_results[components]\n", " r = np.linalg.norm(xy, axis=-1)\n", " return r - self.radius\n", "\n", " def calc_population(self, vars_int, vars_float, problem_results, components=None):\n", " \"\"\"This evaluates the constraints in a vectorized manner,\n", " negative values are valid. See problem.apply_population for\n", " problem results\"\"\"\n", " components = np.s_[:] if components is None else components\n", " xy = problem_results[:, components]\n", " r = np.linalg.norm(xy, axis=-1)\n", " return r - self.radius" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that by default, negative constraint values represent validity. This can be changed by overriding the function `get_bounds` of the base class (see the API), but doing so is not recommended.\n", "\n", "Also note that the function `vardeps_float` is only relevant for gradient-based solvers that support gradient sparsity (e.g. `pygmo.ipopt`). 
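For $N = 3$ charges, the construction in `vardeps_float` yields the following pattern (a standalone illustration of the same recipe, printed as integers for readability):

```python
import numpy as np

n_charges = 3
n_comp = n_charges  # one radius constraint per charge
deps = np.zeros((n_comp, n_charges, 2), dtype=bool)
np.fill_diagonal(deps[..., 0], True)  # component i depends on x_i ...
np.fill_diagonal(deps[..., 1], True)  # ... and on y_i
deps = deps.reshape(n_comp, 2 * n_charges)
print(deps.astype(int))
# [[1 1 0 0 0 0]
#  [0 0 1 1 0 0]
#  [0 0 0 0 1 1]]
```

Each constraint component thus depends only on the two coordinates of its own charge.
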
It is irrelevant for the genetic algorithm that we want to use in this example, but it is added here for the sake of completeness.\n", "\n", "Now let's add the objective and the constraints and initialize the problem:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "N = 20\n", "R = 5\n", "\n", "# generate random initial coordinates of the N charges:\n", "xy = np.random.uniform(-R / 10.0, R / 10.0, (N, 2))\n", "\n", "problem = ChargesProblem(xy, R)\n", "problem.add_objective(MinPotential(problem))\n", "problem.add_constraint(MaxRadius(problem))\n", "problem.initialize()\n", "\n", "fig = problem.get_fig(xy)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Finally, we can now set up the genetic algorithm [GA from pymoo](https://pymoo.org/algorithms/soo/ga.html) and solve the problem, here in vectorized form (flag `vectorize=True`):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "solver = Optimizer_pymoo(\n", " problem,\n", " problem_pars=dict(\n", " vectorize=True,\n", " ),\n", " algo_pars=dict(\n", " type=\"GA\",\n", " pop_size=100,\n", " seed=1,\n", " ),\n", " setup_pars=dict(),\n", " term_pars=dict(\n", " type=\"default\",\n", " n_max_gen=300,\n", " ftol=1e-6,\n", " xtol=1e-6,\n", " ),\n", ")\n", "solver.initialize()\n", "solver.print_info()\n", "\n", "results = solver.solve(verbosity=0)\n", "solver.finalize(results)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(results)\n", "\n", "xy = results.problem_results\n", "fig = problem.get_fig(xy)\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that this problem has many equivalent solutions, and the displayed result is the best solution that the genetic algorithm was able to find within its convergence criteria. With heuristic methods there is generally no guarantee or proof that a global optimum has been found. 
However, the objective function has clearly been minimized, and therefore the \"design\" was improved, which is often the goal in engineering tasks.\n", "\n", "For the fun of it, let's also solve the problem using iwopy's `GG` solver (short for _Greedy Gradient_), now for 50 charges. This solver is a simple steepest-descent algorithm that projects out directions which would violate constraints (and reverts those that already do). It supports iwopy's vectorization capabilities and hence can run fast for many variables. Of course, it is not as advanced as other gradient-based solvers, but for this particular problem it does a good job:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from iwopy import LocalFD\n", "from iwopy.optimizers import GG" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "N = 50\n", "R = 5\n", "\n", "xy = np.random.uniform(-R / 10.0, R / 10.0, (N, 2))\n", "\n", "problem = ChargesProblem(xy, R)\n", "problem.add_objective(MinPotential(problem))\n", "problem.add_constraint(MaxRadius(problem))\n", "\n", "gproblem = LocalFD(problem, deltas=1e-2, fd_order=1)\n", "gproblem.initialize()\n", "\n", "solver = GG(\n", " gproblem,\n", " step_max=0.1,\n", " step_min=1e-4,\n", " vectorized=True,\n", ")\n", "solver.initialize()\n", "\n", "results = solver.solve(verbosity=0)\n", "solver.finalize(results)\n", "print(results)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "xy = results.problem_results\n", "fig = problem.get_fig(xy)\n", "plt.show()" ] } ], "metadata": { "kernelspec": { "display_name": "iwopyi", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.10 (default, Nov 14 2022, 
12:59:47) \n[GCC 9.4.0]" }, "vscode": { "interpreter": { "hash": "0e385321928b3b4f0753cf8f84cadb34ba8fa899d98e78ebdf797f13a2c801ba" } } }, "nbformat": 4, "nbformat_minor": 2 }