diff --git a/docs/source/algorithms/moo/rvea.ipynb b/docs/source/algorithms/moo/rvea.ipynb
index 9adb4fa71..cbd3df762 100644
--- a/docs/source/algorithms/moo/rvea.ipynb
+++ b/docs/source/algorithms/moo/rvea.ipynb
@@ -3,6 +3,9 @@
{
"cell_type": "raw",
"metadata": {
+ "pycharm": {
+ "name": "#%% raw\n"
+ },
"raw_mimetype": "text/restructuredtext"
},
"source": [
@@ -11,21 +14,33 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"# RVEA: Reference Vector Guided Evolutionary Algorithm"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"The algorithm is implemented based on . In RVEA, a scalarization approach, termed angle penalized distance (APD), is adopted to balance the convergence and diversity of the solutions in the high-dimensional objective space. Furthermore, an adaptation strategy is proposed to dynamically adjust the reference vectors' distribution according to the objective functions' scales. An illustration of the APD is shown below:"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"
\n",
"
\n",
@@ -34,7 +49,11 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"### Example"
]
@@ -42,6 +61,9 @@
{
"cell_type": "raw",
"metadata": {
+ "pycharm": {
+ "name": "#%% raw\n"
+ },
"raw_mimetype": "text/restructuredtext"
},
"source": [
@@ -61,6 +83,9 @@
"iopub.status.idle": "2022-07-30T17:29:20.876415Z",
"shell.execute_reply": "2022-07-30T17:29:20.875689Z"
},
+ "pycharm": {
+ "name": "#%%\n"
+ },
"tags": []
},
"outputs": [],
@@ -84,14 +109,18 @@
" verbose=False)\n",
"\n",
"plot = Scatter()\n",
- "plot.add(problem.pareto_front(ref_dirs), plot_type=\"line\", color=\"black\", alpha=0.7)\n",
+ "plot.add(problem.pareto_front(ref_dirs), plot_type=\"surface\", color=\"black\", alpha=0.7)\n",
"plot.add(res.F, color=\"red\")\n",
"plot.show()"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"### API"
]
@@ -99,6 +128,9 @@
{
"cell_type": "raw",
"metadata": {
+ "pycharm": {
+ "name": "#%% raw\n"
+ },
"raw_mimetype": "text/restructuredtext"
},
"source": [
diff --git a/docs/source/problems/parallelization.ipynb b/docs/source/problems/parallelization.ipynb
index a6c2326de..5599a1704 100644
--- a/docs/source/problems/parallelization.ipynb
+++ b/docs/source/problems/parallelization.ipynb
@@ -3,6 +3,9 @@
{
"cell_type": "raw",
"metadata": {
+ "pycharm": {
+ "name": "#%% raw\n"
+ },
"raw_mimetype": "text/restructuredtext"
},
"source": [
@@ -11,14 +14,22 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"# Parallelization"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"In practice, parallelization is essential and can significantly speed up optimization. \n",
"For population-based algorithms, the evaluation of a set of solutions can be parallelized easily \n",
@@ -27,7 +38,11 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"## Vectorized Matrix Operations\n",
"\n",
@@ -39,11 +54,14 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"import numpy as np\n",
- "\n",
"from pymoo.core.problem import Problem\n",
"\n",
"class MyProblem(Problem):\n",
@@ -52,12 +70,18 @@
" super().__init__(n_var=10, n_obj=1, n_ieq_constr=0, xl=-5, xu=5, **kwargs)\n",
"\n",
" def _evaluate(self, x, out, *args, **kwargs):\n",
- " out[\"F\"] = np.sum(x ** 2, axis=1) "
+ " out[\"F\"] = np.sum(x ** 2, axis=1)\n",
+ "\n",
+ "problem = MyProblem()"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"The `axis=1` operation parallelizes the sum of the matrix directly using an efficient NumPy operation."
]
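The vectorized evaluation can be sketched in plain NumPy, independent of pymoo, to show that a single `axis=1` call matches an element-wise loop:

```python
import numpy as np

# a population of 5 solutions, each with 10 decision variables
X = np.random.default_rng(1).uniform(-5, 5, size=(5, 10))

# vectorized: one NumPy call evaluates the whole population at once
F_vec = np.sum(X ** 2, axis=1)

# equivalent element-wise loop, shown for comparison
F_loop = np.array([np.sum(x ** 2) for x in X])

assert np.allclose(F_vec, F_loop)
print(F_vec.shape)  # one objective value per solution
```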
@@ -65,19 +89,27 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"from pymoo.algorithms.soo.nonconvex.ga import GA\n",
"from pymoo.optimize import minimize\n",
"\n",
- "res = minimize(MyProblem(), GA())\n",
+ "res = minimize(problem, GA(), termination=(\"n_gen\", 200), seed=1)\n",
"print('Threads:', res.exec_time)"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"## Starmap Interface\n",
"\n",
@@ -92,7 +124,11 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"from pymoo.core.problem import ElementwiseProblem\n",
@@ -108,14 +144,22 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"Then, we can pass a `starmap` object to be used for parallelization."
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"### Threads"
]
@@ -133,44 +177,33 @@
},
"outputs": [],
"source": [
- "from pymoo.core.problem import starmap_parallelized_eval\n",
"from multiprocessing.pool import ThreadPool\n",
+ "from pymoo.core.problem import StarmapParallelization\n",
+ "from pymoo.algorithms.soo.nonconvex.ga import GA\n",
+ "from pymoo.optimize import minimize\n",
"\n",
- "# the number of threads to be used\n",
- "n_threads = 8\n",
"\n",
- "# initialize the pool\n",
+ "# initialize the thread pool and create the runner\n",
+ "n_threads = 4\n",
"pool = ThreadPool(n_threads)\n",
+ "runner = StarmapParallelization(pool.starmap)\n",
"\n",
"# define the problem by passing the starmap interface of the thread pool\n",
- "problem = MyProblem(runner=pool.starmap, func_eval=starmap_parallelized_eval)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from pymoo.algorithms.soo.nonconvex.pso import PSO\n",
- "from pymoo.optimize import minimize\n",
+ "problem = MyProblem(elementwise_runner=runner)\n",
+ "\n",
+ "res = minimize(problem, GA(), termination=(\"n_gen\", 200), seed=1)\n",
+ "print('Threads:', res.exec_time)\n",
"\n",
- "res = minimize(problem, PSO(), seed=1, n_gen=100)\n",
- "print('Threads:', res.exec_time)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
"pool.close()"
]
},
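The `starmap` contract the runner relies on can be sketched with the standard library alone. A minimal illustration; `evaluate` is a hypothetical element-wise objective:

```python
from multiprocessing.pool import ThreadPool

def evaluate(x):
    # element-wise objective: sum of squares
    return sum(v * v for v in x)

solutions = [[1, 2], [3, 4], [0, 0]]

with ThreadPool(4) as pool:
    # starmap expects an iterable of argument tuples, one per call
    F = pool.starmap(evaluate, [(x,) for x in solutions])

print(F)  # [5, 25, 0]
```

pymoo only needs a callable with this `starmap(func, iterable_of_arg_tuples)` signature, which is why thread pools, process pools, and custom schedulers are interchangeable here.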
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"### Processes"
]
@@ -189,49 +222,53 @@
"outputs": [],
"source": [
"import multiprocessing\n",
+ "from pymoo.algorithms.soo.nonconvex.ga import GA\n",
+ "from pymoo.optimize import minimize\n",
"\n",
- "# the number of processes to be used\n",
+ "\n",
+ "# initialize the thread pool and create the runner\n",
"n_proccess = 8\n",
"pool = multiprocessing.Pool(n_proccess)\n",
- "problem = MyProblem(runner=pool.starmap, func_eval=starmap_parallelized_eval)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "res = minimize(problem, PSO(), seed=1, n_gen=100)\n",
- "print('Processes:', res.exec_time)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
+ "runner = StarmapParallelization(pool.starmap)\n",
+ "\n",
+ "# define the problem by passing the starmap interface of the thread pool\n",
+ "problem = MyProblem(elementwise_runner=runner)\n",
+ "\n",
+ "res = minimize(problem, GA(), termination=(\"n_gen\", 200), seed=1)\n",
+ "print('Threads:', res.exec_time)\n",
+ "\n",
"pool.close()"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"**Note:** Here clearly the overhead of serializing and transfer the data are visible."
]
},
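The serialization overhead mentioned in the note can be made concrete with a small pickle round trip. A sketch only; the byte count is illustrative:

```python
import pickle
import numpy as np

# each solution sent to a worker process must be pickled on the way out
# and unpickled on the way back -- pure overhead when the objective
# function itself is cheap
x = np.random.default_rng(0).uniform(-5, 5, size=10)
payload = pickle.dumps(x)
restored = pickle.loads(payload)

assert np.array_equal(x, restored)
print(len(payload), "bytes transferred per solution")
```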
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"## Dask"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"More advanced is to distribute the evaluation function to a couple of workers. There exists a couple of frameworks that support the distribution of code. For our framework, we recommend using [Dask](https://dask.org).\n",
"\n",
@@ -241,74 +278,86 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
+ "from pymoo.algorithms.soo.nonconvex.ga import GA\n",
+ "from pymoo.optimize import minimize\n",
+ "from pymoo.core.problem import DaskParallelization\n",
+ "\n",
"from dask.distributed import Client\n",
"client = Client()\n",
"client.restart()\n",
- "print(\"STARTED\")"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "import numpy as np\n",
- "from pymoo.core.problem import ElementwiseProblem, dask_parallelized_eval\n",
+ "print(\"DASK STARTED\")\n",
"\n",
- "from dask.distributed import Client\n",
- "client = Client()\n",
+ "# initialize the thread pool and create the runner\n",
+ "runner = DaskParallelization(client)\n",
"\n",
- "# create the problem and set the parallelization to dask\n",
- "problem = MyProblem(runner=client, func_eval=dask_parallelized_eval)\n",
+ "# define the problem by passing the starmap interface of the thread pool\n",
+ "problem = MyProblem(elementwise_runner=runner)\n",
"\n",
- "res = minimize(problem, PSO(), seed=1, n_gen=100)\n",
- "print('Dask:', res.exec_time)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "client.close()"
+ "res = minimize(problem, GA(), termination=(\"n_gen\", 200), seed=1)\n",
+ "print('Threads:', res.exec_time)\n",
+ "\n",
+ "client.close()\n",
+ "print(\"DASK SHUTDOWN\")"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"**Note:** Here, the overhead of transferring data to the workers of Dask is dominating. However, if your problem is computationally more expensive, this shall not be the case anymore."
]
},
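The distributed-evaluation pattern can be sketched without a Dask cluster, using a thread pool executor as a stand-in for the client. A minimal illustration, not the pymoo or Dask API; `evaluate` is a hypothetical objective:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(x):
    # element-wise objective: sum of squares
    return sum(v * v for v in x)

solutions = [[1.0, 2.0], [3.0, 4.0]]

# a Dask client offers a similar submit/map interface; an executor
# stands in for it here so the sketch runs without a cluster
with ThreadPoolExecutor(max_workers=2) as ex:
    F = list(ex.map(evaluate, solutions))

print(F)  # [5.0, 25.0]
```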
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"## Custom Parallelization"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"If you need more control over the parallelization process, we like to provide an example of fully customizable parallelization. The `_evaluate` function gets the whole set of solutions to be evaluated because, by default, `elementwise_evaluation` is disabled."
]
},
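The batch-style `_evaluate` pattern can be sketched without pymoo, stdlib only; `evaluate_batch` is a hypothetical helper standing in for a non-elementwise `_evaluate`:

```python
from multiprocessing.pool import ThreadPool

def f(x):
    # element-wise objective: sum of squares
    return sum(v * v for v in x)

def evaluate_batch(X, pool):
    # receives the whole set of solutions, like a non-elementwise
    # _evaluate, and fans the evaluations out across the pool
    return pool.map(f, X)

X = [[1, 2], [3, 4], [5, 0]]
with ThreadPool(4) as pool:
    F = evaluate_batch(X, pool)
print(F)  # [5, 25, 25]
```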
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"### Threads"
]
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"Thus, a thread pool can be initialized in the constructor of the `Problem` class and then be used to speed up the evaluation.\n",
"The code below basically does what internally happens using the `starmap` interface of *pymoo* directly (with an inline function definition and without some overhead, this is why it is slightly faster)."
@@ -317,7 +366,11 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"from pymoo.core.problem import Problem\n",
@@ -350,17 +403,25 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
- "res = minimize(problem, PSO(), seed=1, n_gen=100)\n",
+ "res = minimize(problem, GA(), termination=(\"n_gen\", 200), seed=1)\n",
"print('Threads:', res.exec_time)"
]
},
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"pool.close()"
@@ -368,7 +429,11 @@
},
{
"cell_type": "markdown",
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%% md\n"
+ }
+ },
"source": [
"### Dask"
]
@@ -376,13 +441,16 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"import numpy as np\n",
"from dask.distributed import Client\n",
"\n",
- "from pymoo.algorithms.soo.nonconvex.ga import GA\n",
"from pymoo.core.problem import Problem\n",
"from pymoo.optimize import minimize\n",
"\n",
@@ -405,12 +473,16 @@
{
"cell_type": "code",
"execution_count": null,
- "metadata": {},
+ "metadata": {
+ "pycharm": {
+ "name": "#%%\n"
+ }
+ },
"outputs": [],
"source": [
"problem = MyProblem()\n",
"\n",
- "res = minimize(problem, PSO(), seed=1, n_gen=100)\n",
+ "res = minimize(problem, GA(), termination=(\"n_gen\", 200), seed=1)\n",
"print('Dask:', res.exec_time)\n",
"\n",
"client.close()"
@@ -433,7 +505,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.3"
+ "version": "3.8.8"
}
},
"nbformat": 4,