Using a Framework and the ZDT Test Suite

Preamble

In [1]:
# used to create block diagrams
%reload_ext xdiag_magic
%xdiag_output_format svg
    
import numpy as np                   # for multi-dimensional containers
import pandas as pd                  # for DataFrames
import plotly.graph_objects as go    # for data visualisation
import plotly.io as pio              # to set shahin plot layout
import platypus as plat              # multi-objective optimisation framework

pio.templates['shahin'] = pio.to_templated(
    go.Figure().update_layout(
        legend=dict(orientation="h", y=1.1, x=.5, xanchor='center'),
        margin=dict(t=0, r=0, b=40, l=40)
    )
).layout.template
pio.templates.default = 'shahin'

Introduction

When preparing to implement multi-objective optimisation experiments, it's often more convenient to use a ready-made framework/library instead of programming everything from scratch. There are many libraries and frameworks that have been implemented in many different programming languages, but as we're using Python we will be selecting from frameworks such as DEAP, PyGMO, and Platypus.

With our focus on multi-objective optimisation, our choice is an easy one: Platypus, which is built specifically around multi-objective problems and optimisation.

Platypus is a framework for evolutionary computing in Python with a focus on multiobjective evolutionary algorithms (MOEAs). It differs from existing optimization libraries, including PyGMO, Inspyred, DEAP, and Scipy, by providing optimization algorithms and analysis tools for multiobjective optimization.
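If Platypus isn't already available in your environment, it can be installed with pip. As far as I'm aware, the PyPI package name is platypus-opt rather than platypus, so from a notebook cell the install command would look something like the following.

!pip install platypus-opt    # PyPI package name assumed to be platypus-opt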

As a first look into Platypus, let's repeat the process covered in the earlier section on "Synthetic Objective Functions and ZDT1", where we randomly initialise a solution and then evaluate it using ZDT1.

In [2]:
%%blockdiag
{
    orientation = portrait
    "Problem Variables" -> "Test Function" -> "Objective Values"
    "Test Function" [color = '#ffffcc']
}
Block diagram: Problem Variables → Test Function → Objective Values

The ZDT test function

Similar to the last time, we will be using a synthetic test problem throughout this notebook called ZDT1. It is part of the ZDT test suite, consisting of six different two-objective synthetic test problems. This is quite an old test suite, easy to solve, and very easy to visualise.

Mathematically, the ZDT1 [1] two-objective test function can be expressed as:

$$ \begin{aligned} f_1(x_1) &= x_1 \tag{1} \\ f_2(x) &= g \cdot h \\ g(x_2,\ldots,x_{\mathrm{D}}) &= 1 + 9 \cdot \sum_{d=2}^{\mathrm{D}} \frac{x_d}{\mathrm{D}-1}\\ h(f_1,g) &= 1 - \sqrt{f_1/g} \end{aligned} $$

where $x$ is a solution to the problem, defined as a vector of $\mathrm{D}$ decision variables.

$$ x= \langle x_{1},x_{2},\ldots,x_{\mathrm{D}} \rangle \tag{2} $$

and all decision variables fall between $0$ and $1$.

$$ 0 \le x_d \le 1, d=1,\ldots,\mathrm{D} \tag{3} $$

For this bi-objective test function, $f_1$ is the first objective, and $f_2$ is the second objective. This particular test function is, by design, scalable to any number of problem variables but is restricted to two problem objectives.

Let's start implementing this in Python, beginning with the initialisation of a solution according to Equations 2 and 3. In this case, we will have 30 problem variables ($\mathrm{D}=30$).

In [3]:
D = 30
x = np.random.rand(D)
print(x)
[0.87237386 0.92808638 0.12822784 0.42461699 0.07676533 0.1059336
 0.10116919 0.98212044 0.50798059 0.18697349 0.57088301 0.0918622
 0.1445064  0.38973214 0.72360086 0.20382492 0.64733888 0.58831445
 0.39389366 0.68950765 0.75862002 0.16861867 0.97931378 0.51856309
 0.43889415 0.24061761 0.06984669 0.12362507 0.50009499 0.59296542]

Now that we have a solution to evaluate, let's implement the ZDT1 synthetic test function using Equation 1.

In [4]:
def ZDT1(x):
    f1 = x[0]                             # objective 1
    g = 1 + 9 * np.sum(x[1:D] / (D-1))    # distance from the Pareto-optimal front
    h = 1 - np.sqrt(f1 / g)               # shape of the trade-off between f1 and g
    f2 = g * h                            # objective 2
    
    return [f1, f2]
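One thing worth noting is that this implementation relies on the global variable D. To illustrate the earlier point that ZDT1 is scalable to any number of problem variables, here is a small sketch of an alternative version that infers the number of variables from the solution itself (the name ZDT1_any_D is just for illustration).

def ZDT1_any_D(x):
    D = len(x)                                # infer the number of variables from the solution
    f1 = x[0]                                 # objective 1
    g = 1 + 9 * np.sum(x[1:D] / (D - 1))      # distance from the Pareto-optimal front
    h = 1 - np.sqrt(f1 / g)                   # shape of the trade-off between f1 and g
    f2 = g * h                                # objective 2
    return [f1, f2]

print(ZDT1_any_D(np.random.rand(10)))         # evaluating a 10-variable solution works just as well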

Finally, let's invoke our implemented test function using our solution $x$ from earlier.

In [5]:
objective_values = ZDT1(x)
print(objective_values)
[0.8723738604708684, 2.761515792435468]

Now we can see the two objective values that measure our solution $x$ according to the ZDT1 synthetic test function, which is a minimisation problem.
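As a quick sanity check on the minimisation behaviour, we can exploit the known structure of ZDT1: setting $x_2,\ldots,x_{\mathrm{D}}$ to zero gives $g=1$, which places the solution on the Pareto-optimal front where $f_2 = 1 - \sqrt{f_1}$. The sketch below evaluates one such solution with our own implementation.

x_optimal = np.zeros(D)     # setting x_2 ... x_D to zero gives g = 1
x_optimal[0] = 0.25         # the first variable can take any value in [0, 1]
print(ZDT1(x_optimal))      # expect [0.25, 0.5], since f2 = 1 - sqrt(0.25) = 0.5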

Using a Framework

We've quickly repeated our earlier exercise, where we moved from our mathematical description of ZDT1 to an implementation in Python. Now, let's use Platypus' implementation of ZDT1, which saves us from having to implement it in Python ourselves.

We have already imported Platypus as plat above, so to get an instance of ZDT1 all we need to do is use the object constructor.

In [6]:
problem = plat.ZDT1()

Just like that, our variable problem references an instance of the ZDT1 test problem.
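We can also inspect the problem definition itself, for example its number of decision variables and objectives. The attribute names below (nvars and nobjs) are, to the best of my knowledge, the ones Platypus uses on its Problem class.

print(f"ZDT1 has {problem.nvars} decision variables")
print(f"ZDT1 has {problem.nobjs} objectives")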

Now we need to create a solution in a structure that is defined by Platypus. This solution object is what Platypus expects when performing all of the operations that it provides.

In [7]:
solution = plat.Solution(problem)

By using the Solution() constructor and passing in our earlier instantiated problem, the solution is initialised with the correct number of variables and objectives. We can check this ourselves.

In [8]:
print(f"This solution's variables are set to:\n{solution.variables}")
print(f"This solution has {len(solution.variables)} variables")
This solution's variables are set to:
[None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None]
This solution has 30 variables
In [9]:
print(f"This solution's objectives are set to:\n{solution.objectives}")
print(f"This solution has {len(solution.objectives)} objectives")
This solution's objectives are set to:
[None, None]
This solution has 2 objectives
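At this point the solution is just a container waiting to be filled. As far as I'm aware, Platypus also tracks whether a solution has been evaluated yet via an evaluated flag, which we would expect to be False for now.

print(f"Has this solution been evaluated yet? {solution.evaluated}")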

Earlier in this notebook we randomly generated 30 problem variables and stored them in the variable x. Let's assign these to our solution's variables and check that it works.

In [10]:
solution.variables = x
print(f"This solution's variables are set to:\n{solution.variables}")
This solution's variables are set to:
[0.87237386 0.92808638 0.12822784 0.42461699 0.07676533 0.1059336
 0.10116919 0.98212044 0.50798059 0.18697349 0.57088301 0.0918622
 0.1445064  0.38973214 0.72360086 0.20382492 0.64733888 0.58831445
 0.39389366 0.68950765 0.75862002 0.16861867 0.97931378 0.51856309
 0.43889415 0.24061761 0.06984669 0.12362507 0.50009499 0.59296542]

Now we can invoke the evaluate() method which will use the assigned problem to evaluate the problem variables and calculate the objective values. We can print these out afterwards to see the results.

In [11]:
solution.evaluate()
print(solution.objectives)
[0.8723738604708684, 2.761515792435468]

These objectives should be the same as the ones calculated by our own implementation of ZDT1; any difference between the two implementations would come down to floating-point rounding.
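We can also confirm this programmatically. A minimal sketch, assuming solution.objectives can be converted to a plain list for comparison:

print(np.allclose(objective_values, list(solution.objectives)))    # expect True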

Conclusion

In this section we introduced a framework for multi-objective optimisation and analysis. We used it to create an instance of the ZDT1 test problem, which we then used to initialise an empty solution. This empty solution was assigned our randomly generated problem variables, and then evaluated according to ZDT1 to calculate our objective values.

Exercise

Using the framework introduced in this section, evaluate a number of randomly generated solutions for ZDT2, ZDT3, ZDT4, and ZDT6.
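Here is one possible starting point, sketched under the assumption that Platypus exposes these problems as plat.ZDT2, plat.ZDT3, plat.ZDT4, and plat.ZDT6, and that its RandomGenerator produces a solution whose variables are randomly sampled within each problem's bounds (useful for ZDT4, whose variables are not all in $[0, 1]$).

generator = plat.RandomGenerator()

for problem_class in [plat.ZDT2, plat.ZDT3, plat.ZDT4, plat.ZDT6]:
    problem = problem_class()
    solution = generator.generate(problem)    # random variables within each variable's bounds
    solution.evaluate()
    print(f"{problem_class.__name__}: {solution.objectives}")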


  1. E. Zitzler, K. Deb, and L. Thiele. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173-195, 2000