Synthetic Objective Functions and ZDT1

Preamble

In [1]:
# used to create block diagrams
%reload_ext xdiag_magic
%xdiag_output_format svg
    
import numpy as np                   # for multi-dimensional containers
import pandas as pd                  # for DataFrames
import plotly.graph_objects as go    # for data visualisation
import plotly.io as pio              # to set shahin plot layout

pio.templates['shahin'] = pio.to_templated(
    go.Figure().update_layout(
        legend=dict(orientation="h", y=1.1, x=.5, xanchor='center'),
        margin=dict(t=0, r=0, b=40, l=40))).layout.template
pio.templates.default = 'shahin'

Introduction

In mathematics, optimisation is concerned with the selection of optimal solutions to objective functions. An objective function takes input arguments referred to as problem variables (or genotype), which are passed through one or more mathematical functions to determine the objective value (or phenotype).

Real-world optimisation problems are expressed as one (in the case of single-objective optimisation) or many (in the case of multi-objective optimisation) objective functions so that they can be optimised by an optimisation algorithm. The difficulty of convergence can be reduced by bounding the problem variables, as this reduces the size of the search domain.
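To make the relationship between problem variables and objective values concrete, here is a minimal sketch of a bounded single-objective function. The "sphere" function and the bounds of $[-5, 5]$ are illustrative assumptions only, and are not part of ZDT1.

import numpy as np                      # already imported in the preamble

def sphere(x, lower=-5.0, upper=5.0):
    x = np.clip(x, lower, upper)        # enforce the variable bounds
    return np.sum(x ** 2)               # a single objective value (to be minimised)

x = np.random.uniform(-10, 10, size=5)  # problem variables (genotype)
print(sphere(x))                        # objective value (phenotype)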

In order to determine an Evolutionary Algorithm's robustness when solving problems consisting of multiple objectives, its performance must be assessed on synthetic test functions which are created specifically for the purpose of testing. These problems may also be used to systematically compare two or more Evolutionary Algorithms.

In [2]:
%%blockdiag
{
    orientation = portrait
    "Problem Variables" -> "Test Function" -> "Objective Values"
    "Test Function" [color = '#ffffcc']
}

Synthetic test functions are typically:

  • Intentionally difficult, meaning they are designed to include optimisation difficulties which are present in real-world problems.
  • Scalable, meaning they can be configured with a different number of problem variables and objectives.
  • Computationally efficient, meaning they are faster to execute than a real-world problem. This is desirable when benchmarking an Evolutionary Algorithm.

In contrast, real-world problems that have been encapsulated within an objective function for use by an optimiser are often computationally expensive and have long execution times. This is because synthetic test functions are often mathematical equations which aim to cause difficulty for an optimiser when searching for problem variables that produce optimal objective values, whereas real-world problems often involve computationally expensive simulations in order to arrive at the objective values.

Put simply, using a real-world problem to evaluate the performance of a newly proposed Evolutionary Algorithm only allows us to determine if the algorithm is good at solving that single problem. What we're interested in is analysing how Evolutionary Algorithms perform when encountering the various difficulties that appear in multi-objective problems, and how they compare to each other.

The ZDT test function

We will be using a synthetic test problem throughout this notebook called ZDT1. It is part of the ZDT test suite, consisting of six different two-objective synthetic test problems. This is quite an old test suite, easy to solve, and very easy to visualise.

Mathematically, the ZDT1 two-objective test function can be expressed as:

$$ \begin{aligned} f_1(x_1) &= x_1 \tag{1} \\ f_2(x) &= g \cdot h \\ g(x_2,\ldots,x_{\mathrm{D}}) &= 1 + 9 \cdot \sum_{d=2}^{\mathrm{D}} \frac{x_d}{\mathrm{D}-1}\\ h(f_1,g) &= 1 - \sqrt{f_1/g} \end{aligned} $$

where $x$ is a solution to the problem, defined as a vector of $\mathrm{D}$ decision variables.

$$ x= \langle x_{1},x_{2},\ldots,x_{\mathrm{D}} \rangle \tag{2} $$

and all decision variables fall between $0$ and $1$.

$$ 0 \le x_d \le 1, d=1,\ldots,\mathrm{D} \tag{3} $$

For this bi-objective test function, $f_1$ is the first objective, and $f_2$ is the second objective. This particular objective function is, by design, scalable to any number of problem variables but is restricted to two objectives.
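To see how Equation 1 behaves, consider a small worked example with $\mathrm{D}=3$ and $x = \langle 0.5, 1.0, 1.0 \rangle$ (values chosen arbitrarily for illustration):

$$ f_1 = 0.5, \qquad g = 1 + 9 \cdot \frac{1.0 + 1.0}{3-1} = 10, \qquad h = 1 - \sqrt{0.5/10} \approx 0.776, \qquad f_2 = g \cdot h \approx 7.76 $$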

Let's start implementing this in Python, beginning with the initialisation of a solution according to Equations 2 and 3. In this case, we will use 30 problem variables ($\mathrm{D}=30$).

In [3]:
D = 30
x = np.random.rand(D)
print(x)
[0.75061118 0.11255588 0.55607772 0.72423745 0.28283903 0.79219743
 0.832945   0.71704905 0.23929141 0.17819597 0.88446536 0.74151744
 0.24085801 0.89971141 0.25621475 0.77104792 0.62103457 0.53415434
 0.82897093 0.58473926 0.74103955 0.80108694 0.38492988 0.91549616
 0.20457224 0.37212879 0.50717252 0.68057933 0.66811952 0.14805604]
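The values above are random, so they will differ on every execution of this cell. If reproducible results are desired, the NumPy random generator can be seeded first (a minimal sketch; the seed value is arbitrary):

np.random.seed(42)     # arbitrary seed, for reproducibility only
x = np.random.rand(D)  # the same solution on every run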

Now that we have a solution to evaluate, let's implement the ZDT1 synthetic test function using Equation 1.

In [4]:
def ZDT1(x):
    f1 = x[0]                             # objective 1: simply the first problem variable
    g = 1 + 9 * np.sum(x[1:D] / (D - 1))  # distance function g(x_2, ..., x_D)
    h = 1 - np.sqrt(f1 / g)               # shape function h(f_1, g)
    f2 = g * h                            # objective 2

    return [f1, f2]

Finally, let's invoke our implemented test function using our solution $x$ from earlier.

In [5]:
objective_values = ZDT1(x)
print(objective_values)
[0.75061117943887, 3.905968910094397]

Now we can see the two objective values that measure our solution $x$ according to the ZDT1 synthetic test function. ZDT1 is a minimisation problem, so smaller objective values are better.
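Before moving on, here is a minimal sketch (my own addition, with an arbitrary population size of 50) that evaluates a whole population of random solutions using the ZDT1 function defined above; we will want many such objective values when looking at objective space.

N = 50                                                # arbitrary population size
population = np.random.rand(N, D)                     # N random solutions, D variables each
objectives = np.array([ZDT1(s) for s in population])  # evaluate each solution
print(objectives.shape)                               # (50, 2): one (f1, f2) pair per solution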

Performance in Objective Space

We will discuss the desirable characteristics of multi-objective solutions later, but for now let's plot some randomly initialised solutions against an optimal set of solutions for ZDT1. Because ZDT1 is a synthetic test function, its authors have provided us with a way to calculate the optimal set analytically.

$$ f_2 = 1 - \sqrt{f_1} \tag{4} $$

Let's use this to generate 20 evenly spaced pairs of optimal objective values.

In [6]:
true_front = np.empty((0, 2))

for f1 in np.linspace(0, 1, num=20):
    f2 = 1 - np.sqrt(f1)
    true_front = np.vstack([true_front, [f1, f2]])  

# convert to DataFrame
true_front = pd.DataFrame(true_front, columns=['f1','f2'])
true_front
Out[6]:
f1 f2
0 0.000000 1.000000
1 0.052632 0.770584
2 0.105263 0.675557
3 0.157895 0.602640
4 0.210526 0.541169
5 0.263158 0.487011
6 0.315789 0.438049
7 0.368421 0.393023
8 0.421053 0.351114
9 0.473684 0.311753
10 0.526316 0.274524
11 0.578947 0.239114
12 0.631579 0.205281
13 0.684211 0.172830
14 0.736842 0.141605
15 0.789474 0.111477
16 0.842105 0.082337
17 0.894737 0.054095
18 0.947368 0.026671
19 1.000000 0.000000
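As an aside, the same set of points can be produced without the explicit loop by vectorising the calculation over f1 (an alternative sketch, not the approach used above):

f1 = np.linspace(0, 1, num=20)                                # evenly spaced f1 values
true_front = pd.DataFrame({'f1': f1, 'f2': 1 - np.sqrt(f1)})  # Equation 4 applied element-wise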

Now we can plot them to have an idea of the shape of the true front for ZDT1 in objective space.

In [7]:
fig = go.Figure(layout=dict(xaxis=dict(title='f1'),yaxis=dict(title='f2')))

fig.add_scatter(x=true_front.f1, y=true_front.f2, mode='markers')

fig.show()
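To preview the comparison described at the start of this section, here is a minimal sketch (my own addition, reusing the objectives array from the earlier population sketch) that overlays the randomly generated objective values onto the true front:

fig = go.Figure(layout=dict(xaxis=dict(title='f1'), yaxis=dict(title='f2')))

fig.add_scatter(x=true_front.f1, y=true_front.f2, mode='markers', name='true front')
fig.add_scatter(x=objectives[:, 0], y=objectives[:, 1], mode='markers', name='random solutions')

fig.show()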