# Populations in Objective and Decision Space


## Preamble

In [55]:
# used to create block diagrams
%xdiag_output_format svg

import numpy as np                   # for multi-dimensional containers
import pandas as pd                  # for DataFrames
import plotly.graph_objects as go    # for data visualisation
import plotly.io as pio              # to set shahin plot layout

pio.templates['shahin'] = pio.to_templated(go.Figure().update_layout(margin=dict(t=0,r=0,b=40,l=40))).layout.template
pio.templates.default = 'shahin'


## Introduction

In single-objective problems, the objective is to find a single candidate solution which represents the global optimum in the entire search space. Multi-objective problems often involve conflicts between multiple objectives, and as a result it is unlikely that there exists a single optimal solution. Therefore, the solution to a multi-objective problem cannot be a single candidate solution, but instead a set of candidate solutions which represent the optimum trade-off surface in the objective space. This set is referred to as an approximation set, as it is an approximation of the theoretical optima.

Through the optimisation process in an Evolutionary Multi-Objective Optimisation Algorithm, a population of solutions is iteratively subjected to various operators until termination, at which point this final population (or a sub-set) is determined to be the approximation set.

In [46]:
%%blockdiag
{
orientation = portrait
Initialisation -> Evaluation -> "Terminate?" -> Selection -> Variation -> Evaluation
}
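The loop in the diagram above can be sketched in plain Python. This is a minimal, hypothetical skeleton only: the `evaluate`, `select`, and `vary` functions here are placeholder operators (not the operators used later in this notebook), and termination is a fixed generation budget.

```python
import numpy as np

rng = np.random.default_rng(42)

def evaluate(X):
    # placeholder objective: sum of squares per solution
    return np.sum(X**2, axis=1, keepdims=True)

def select(X, Y):
    # placeholder selection: keep the better half by objective value
    order = np.argsort(Y[:, 0])
    return X[order[: len(X) // 2]]

def vary(parents, size):
    # placeholder variation: Gaussian mutation of randomly chosen parents
    idx = rng.integers(0, len(parents), size)
    children = parents[idx] + rng.normal(0, 0.1, (size, parents.shape[1]))
    return np.clip(children, 0, 1)

# initialisation
X = rng.uniform(size=(20, 5))
Y = evaluate(X)

for generation in range(50):       # "Terminate?" after a fixed budget
    parents = select(X, Y)         # selection
    X = vary(parents, len(X))      # variation
    Y = evaluate(X)                # evaluation

print(Y.min())
```

The final population `X` (or a subset of it) would then serve as the approximation set.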


We can introduce and describe these concepts through a demonstration, by generating a population of decision variables and using the ZDT1 test function (described in earlier sections) as the objective function.

In [16]:
def ZDT1(x):
    # note: relies on the global D (number of decision variables) defined below
    f1 = x[0]                              # objective 1
    g = 1 + 9 * np.sum(x[1:D] / (D - 1))
    h = 1 - np.sqrt(f1 / g)
    f2 = g * h                             # objective 2

    return [f1, f2]
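As a quick sanity check (a sketch, assuming `D = 30` as defined later in this section), a solution whose trailing variables are all zero gives $g = 1$, so it lies on ZDT1's theoretical Pareto-optimal front $f_2 = 1 - \sqrt{f_1}$:

```python
import numpy as np

D = 30  # number of decision variables, as defined later in this section

def ZDT1(x):
    f1 = x[0]                              # objective 1
    g = 1 + 9 * np.sum(x[1:D] / (D - 1))
    h = 1 - np.sqrt(f1 / g)
    f2 = g * h                             # objective 2
    return [f1, f2]

# a Pareto-optimal solution: x1 is free, all other variables are 0
x = np.zeros(D)
x[0] = 0.25
f1, f2 = ZDT1(x)
print(f1, f2)  # f2 = 1 - sqrt(0.25) = 0.5
```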


We will describe these concepts mathematically and then implement them using Python.

## Populations in Objective and Decision Space

We will begin by defining a population of size $\mathrm{N}=50$, number of decision variables $\mathrm{D}=30$ where each variable falls between 0 and 1 ($x_d \in [0,1]$), and number of objective values $\mathrm{M}=2$.

In [49]:
N = 50
D = 30
D_lower = np.zeros((1, D))
D_upper = np.ones((1, D))
M = 2


Let's define a population $\mathrm{X}$ of $\mathrm{N}$ candidate solutions.

$$\mathbf{X} = \langle \mathrm{X}_1,\mathrm{X}_2,\ldots,\mathrm{X}_{\mathrm{N}}\rangle$$

where $\mathrm{N}$ refers to the number of solutions in the population, $\mathrm{X}_{n}$ refers to the $n$-th solution in the population, and $x_{dn}$ refers to the $d$-th decision variable of the $n$-th solution in the population.

$$\mathrm{X}_{n} = \langle x_{1n},x_{2n},\ldots,x_{\mathrm{D}n} \rangle$$

Let's initialise this population with random decision variables.

In [50]:
X = pd.DataFrame(np.random.uniform(low=D_lower, high=D_upper, size=(N,D)))
X.head(5) # Show only the first 5 solutions

Out[50]:
0 1 2 3 4 5 6 7 8 9 ... 20 21 22 23 24 25 26 27 28 29
0 0.681083 0.529775 0.096822 0.604449 0.768480 0.243406 0.956636 0.029199 0.979766 0.771493 ... 0.971313 0.998869 0.231171 0.821477 0.154853 0.466902 0.069785 0.827801 0.226067 0.828507
1 0.389715 0.992283 0.801073 0.399629 0.685195 0.300237 0.677252 0.569246 0.463169 0.488090 ... 0.651012 0.787553 0.187985 0.081535 0.801847 0.786725 0.234757 0.971415 0.557913 0.791015
2 0.904822 0.108761 0.393159 0.001803 0.845573 0.282934 0.018924 0.535487 0.834753 0.789524 ... 0.197003 0.996313 0.102618 0.166409 0.031638 0.254799 0.302446 0.403836 0.871756 0.199574
3 0.956347 0.907517 0.368565 0.169989 0.000010 0.854048 0.271804 0.011885 0.899716 0.646918 ... 0.029673 0.969572 0.278754 0.660740 0.448692 0.636155 0.006910 0.546494 0.131707 0.618275
4 0.088441 0.713433 0.223105 0.069362 0.990742 0.442697 0.714386 0.462220 0.381068 0.090390 ... 0.350294 0.954266 0.137212 0.549978 0.348956 0.494651 0.147630 0.173815 0.628073 0.807006

5 rows × 30 columns
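This works because `np.random.uniform` broadcasts the `(1, D)` bound arrays across the `N` rows. A quick check (a sketch, assuming the same `N`, `D`, and bounds as above) confirms that every decision variable falls within its bounds:

```python
import numpy as np
import pandas as pd

N, D = 50, 30
D_lower = np.zeros((1, D))
D_upper = np.ones((1, D))

X = pd.DataFrame(np.random.uniform(low=D_lower, high=D_upper, size=(N, D)))

# the (1, D) bounds broadcast against the (N, D) population
in_bounds = ((X.values >= D_lower) & (X.values <= D_upper)).all()
print(in_bounds)  # True
```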

Similarly, we will define the corresponding objective values $\mathrm{Y}$ of the population $\mathrm{X}$.

$$\mathbf{Y} = \langle \mathrm{Y}_1,\mathrm{Y}_2,\ldots,\mathrm{Y}_{\mathrm{N}}\rangle$$

where $\mathrm{N}$ refers to the number of objective value sets, $\mathrm{Y}_{n}$ refers to the $n$-th set of objective values in the population, and $y_{mn}$ refers to the $m$-th objective value of the $n$-th set of objective values in the population.

$$\mathrm{Y}_{n} = \langle y_{1n},y_{2n},\ldots,y_{\mathrm{M}n} \rangle$$

To generate $\mathrm{Y}$, we will evaluate each solution $\mathrm{X}_{n}$ using the ZDT1() test function defined above.

In [28]:
Y = np.empty((0, M))

for n in range(N):
    y = ZDT1(X.iloc[n])
    Y = np.vstack([Y, y])

# convert to DataFrame
Y = pd.DataFrame(Y, columns=['f1', 'f2'])
Y.head(5) # Show only the first 5 sets of objective values

Out[28]:
f1 f2
0 0.514914 3.616102
1 0.510075 4.239585
2 0.503288 4.416120
3 0.588565 4.579634
4 0.620797 3.622504
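As an aside, the explicit loop above can be replaced by a single row-wise `apply` call. This is a sketch assuming the same `ZDT1` and a randomly initialised `X` as earlier; `result_type='expand'` spreads the returned `[f1, f2]` list across two columns.

```python
import numpy as np
import pandas as pd

N, D = 50, 30

def ZDT1(x):
    f1 = x[0]                              # objective 1
    g = 1 + 9 * np.sum(x[1:D] / (D - 1))
    h = 1 - np.sqrt(f1 / g)
    f2 = g * h                             # objective 2
    return [f1, f2]

X = pd.DataFrame(np.random.uniform(size=(N, D)))

# evaluate every solution in one call instead of an explicit loop
Y = X.apply(ZDT1, axis=1, result_type='expand')
Y.columns = ['f1', 'f2']
print(Y.shape)
```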

Finally, we will visualise each of our 50 solutions in objective space.

In [52]:
fig = go.Figure(layout=dict(xaxis=dict(title='f1'), yaxis=dict(title='f2')))

# This is not how we would normally plot this data.
# Here, it is done this way so that the colours match those in the visualisation below.
for index, row in Y.iterrows():
    fig.add_trace(go.Scatter(x=[row.f1], y=[row.f2], mode='markers', name=f'solution {index}'))

fig.show()


For completeness, we may also wish to visualise our 50 solutions in the decision space, although it is not very useful at this point.

In [54]:
fig = go.Figure(layout=dict(xaxis=dict(title='decision variables', range=[1, D]),yaxis=dict(title='value')))

for index, row in X.iterrows():